Brief-details: Specialized LoRA model for generating vintage-style advertisements, built on FLUX.1-dev. Features a unique trigger phrase and non-commercial licensing.
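A minimal diffusers sketch of loading a FLUX.1-dev LoRA; the LoRA repo id and the trigger token below are hypothetical placeholders, as the actual values come from the model card:

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model, then attach the LoRA weights.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("your-username/vintage-ads-lora")  # hypothetical repo id
pipe.enable_model_cpu_offload()  # reduces VRAM pressure on consumer GPUs

# The trigger phrase must appear in the prompt to activate the LoRA style.
image = pipe(
    "vntg_ad, a 1950s soda advertisement, pastel colors",  # "vntg_ad" is a placeholder trigger
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("vintage_ad.png")
```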
Brief-details: RealFlux 1.0b Schnell is a text-to-image model focused on realistic image generation, optimized for 4-6 sampling steps with the Euler Beta sampling method and model-specific CFG settings.
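A hedged diffusers sketch of the recommended low-step setup; the repo id and CFG value are placeholders, and the "Euler Beta" combination is normally selected as a sampler/scheduler pair in ComfyUI, so the pipeline's default flow-match Euler scheduler is used here as an approximation:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "author/RealFlux-1.0b-Schnell",  # hypothetical repo id
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    "photorealistic portrait, natural window lighting",
    num_inference_steps=4,  # model card recommends 4-6 steps
    guidance_scale=1.0,     # placeholder; use the CFG value from the model card
).images[0]
image.save("realflux_sample.png")
```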
Brief-details: Meta's 8B parameter instruction-tuned LLM, quantized to 4-bit and packaged for the MLX framework. Features enhanced dialogue capabilities and efficient resource usage.
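A minimal mlx-lm usage sketch; the mlx-community repo id below follows the usual naming scheme for 4-bit Llama 3 builds and may differ from the exact checkpoint described above:

```python
from mlx_lm import load, generate

# Load the 4-bit MLX build of Llama-3-8B-Instruct (assumed repo id).
model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct-4bit")

messages = [{"role": "user", "content": "Explain 4-bit quantization in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```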
Brief-details: SmolLM-360M-Instruct is a compact 362M parameter language model optimized for efficient text generation and conversation, featuring BF16 precision and an Apache 2.0 license.
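A small transformers sketch showing the chat-template workflow in BF16; the checkpoint id is the standard HuggingFaceTB naming for this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small enough to run on CPU; bfloat16 matches the shipped weights.
checkpoint = "HuggingFaceTB/SmolLM-360M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "What is gravity?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, temperature=0.2, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```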
Brief-details: Microsoft's WaveCoder-Ultra-6.7B is a powerful code-focused LLM achieving 79.9% on HumanEval, featuring advanced instruction tuning and multi-task capabilities.
Brief-details: InternLM2.5 7B Chat GGUF is an optimized conversational AI model with 7.74B parameters, supporting multilingual use and function calling, available in various quantization formats.
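A llama-cpp-python sketch for running one of the GGUF quantizations locally; the filename is illustrative and stands in for whichever quant you download:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="internlm2_5-7b-chat-q4_k_m.gguf",  # hypothetical local filename
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to GPU; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize function calling in one line."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```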
Brief-details: SDXL-based anime model trained on 3.6M images using LyCORIS fine-tuning. Features an advanced tag system, quality ratings, and specialized artist-style blending.
Brief-details: LLaVA 1.6 GGUF is a 6.74B parameter image-text-to-text model optimized for efficient inference, supporting advanced visual understanding and text generation.
Brief-details: 4-bit quantized version of the TeleChat-7B language model, trained on 1.5T tokens and optimized for Chinese text generation and dialogue, with strong benchmark performance.
Brief-details: A state-of-the-art 7B parameter math-focused LLM achieving 83.2% pass@1 on GSM8K and 33.0% on MATH, outperforming larger models including ChatGPT 3.5 and Gemini Pro.
Brief-details: 6.7B parameter coding-focused LLM optimized for source-code generation, featuring GGUF quantization for efficient deployment and the OSS-Instruct training methodology.
Brief-details: Yi-34B-GGUF is a powerful 34.4B parameter LLM optimized for CPU/GPU inference, offering multiple quantization options and strong multilingual capabilities.
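A llama-cpp-python sketch for selecting one of the quantizations by filename pattern; the repo id follows the common TheBloke naming convention and is an assumption here:

```python
from llama_cpp import Llama

# Larger quants trade RAM for output quality; Q4_K_M is a common middle ground.
llm = Llama.from_pretrained(
    repo_id="TheBloke/Yi-34B-GGUF",  # assumed repo id
    filename="*Q4_K_M.gguf",         # pick Q5_K_M or Q8_0 for higher fidelity
    n_gpu_layers=35,                 # partial GPU offload; 0 = pure CPU
    n_ctx=4096,
)
print(llm("Yi-34B is", max_tokens=32)["choices"][0]["text"])
```

Mixing CPU and GPU via `n_gpu_layers` is what makes a 34B-class model practical on machines that cannot hold all layers in VRAM.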
Brief-details: A powerful 7B parameter LLM based on Llama 2, achieving an 87.82% win rate on AlpacaEval and ranking #1 among 7B models.
Brief-details: A 7B parameter bilingual (Japanese/English) chat model based on the LLaMA architecture, with a 32K context length and an Apache 2.0 license.
Brief-details: Pygmalion-2-13B is an instruction-tuned, 13B parameter Llama-2-based model optimized for fiction writing and conversational AI.
Brief-details: A 7B parameter LLaMA2-based uncensored language model trained on Orca-style datasets, optimized for instruction-following and reasoning tasks.
Brief-details: Taiwan-LLaMa-v1.0 is a 13B parameter LLM specialized in Traditional Chinese and fine-tuned for Taiwan's cultural context, with advanced language-understanding capabilities.
Brief-details: An 8B parameter LLM based on Llama-3, fine-tuned with Self-Play Preference Optimization (SPPO) over three iterations. Shows strong instruction-following performance, scoring 68.28% on IFEval.
Brief-details: A 2.7B parameter hybrid SSM-transformer model that excels at instruction-following tasks, outperforming larger models while delivering faster inference and lower latency.
Brief-details: Bilingual (Chinese/English) instruction-tuned 7B parameter LLM, fine-tuned with LoRA on Alpaca datasets. Supports text generation with detailed responses.
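A PEFT sketch of attaching LoRA adapter weights to a base model; both repo ids are hypothetical placeholders standing in for the checkpoints described above:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("base-org/base-7b", torch_dtype=torch.float16)  # placeholder
model = PeftModel.from_pretrained(base, "author/bilingual-alpaca-lora-7b")  # placeholder adapter
tokenizer = AutoTokenizer.from_pretrained("base-org/base-7b")

# Alpaca-style prompt template; the instruction asks (in Chinese): "Introduce yourself."
prompt = "### Instruction:\n用中文介绍一下你自己。\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```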
Brief-details: Text-to-image model specialized in high-quality anime-style generation with support for Danbooru tags. Released under the CreativeML OpenRAIL-M license, with 5,261 downloads.
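A diffusers sketch showing the Danbooru-tag prompting style; the repo id is a hypothetical placeholder, and prompts are comma-separated booru tags rather than natural-language sentences:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "author/anime-model",  # hypothetical repo id
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "1girl, solo, cherry_blossoms, school_uniform, masterpiece, best_quality",
    negative_prompt="lowres, bad_anatomy, worst_quality",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("anime.png")
```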