Brief-details: A powerful 14B parameter LLM with strong performance across benchmarks like MMLU (67.36% accuracy) and CEval (73.10% accuracy), featuring multilingual support and GGUF quantization for efficient local inference.
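Below is a minimal sketch of running a GGUF build of a model like this with llama-cpp-python; the repo id and filename are placeholders, so substitute the actual GGUF files listed on the model card.

```python
# Minimal sketch: loading a GGUF quantization with llama-cpp-python.
# Repo id and filename are hypothetical; check the model card for the
# real GGUF file names and the recommended chat format.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="some-org/some-14b-model-GGUF",  # hypothetical repo id
    filename="*Q4_K_M.gguf",                 # pick a quantization level
    n_ctx=4096,                              # context window to allocate
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```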
Brief-details: A 13B parameter Llama-2-based language model merging Pygmalion-2 and MythoMax, optimized for roleplay/chat and supporting dual prompting formats.
Brief-details: A 7B parameter instruction-tuned LLM based on Pythia-6.9b, fine-tuned on Databricks' 15k-example instruction dataset. Open source and commercially usable.
Brief-details: NemoMix-Unleashed-12B is a merged 12.2B parameter language model optimized for RP and storytelling, combining multiple Mistral-based models with enhanced context handling and reduced repetition.
Brief-details: A 70B parameter LLM optimized for tool use and function calling, achieving 90.76% accuracy on the Berkeley Function-Calling Leaderboard (BFCL). Built on Llama 3 and fine-tuned with DPO.
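As a rough illustration of the function-calling flow, the sketch below uses the generic transformers tool-calling chat-template API; the model id is hypothetical and the exact tool-call output format is model-specific, so treat this as the shape of the request rather than this model's documented interface.

```python
# Minimal sketch of passing a tool schema through a chat template.
# The model id is a placeholder; parsing the model's tool-call output
# is model-specific and not shown here.
from transformers import AutoTokenizer

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city.
    """
    return "sunny"

tok = AutoTokenizer.from_pretrained("some-org/tool-use-70b")  # hypothetical
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[get_weather],        # JSON schema is derived from the docstring
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # model-specific prompt embedding the tool schema
```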
Brief-details: An experimental high-resolution anime-style Stable Diffusion model trained on 5k curated images, featuring 768/1024 base resolutions and specializing in detailed character generation.
Brief-details: An experimental 22B parameter dense model derived from MoE compression, featuring strong math and conversational abilities. Apache 2.0 licensed.
Brief-details: OuteTTS-0.2-500M: A 500M parameter multilingual text-to-speech model supporting English, Chinese, Japanese, and Korean, built on Qwen-2.5-0.5B with enhanced voice cloning capabilities.
Brief-details: Powerful 70B parameter LLM fine-tuned with DPO, achieving 7.89 MT-Bench score and 95.1% AlpacaEval win rate. Built on Llama-2 for enhanced dialogue.
Brief-details: Chat-optimized Mistral MoE model fine-tuned on the SlimOrca dataset using QLoRA, featuring 8 experts and trained on 6x H100 GPUs for enhanced conversational AI capabilities.
Brief-details: A powerful 7B parameter LLM based on Mistral, fine-tuned with DPO, achieving strong benchmark results. Optimized for instruction following and general text generation tasks.
Brief-details: A fine-tuned Stable Diffusion model specialized in generating James Webb Space Telescope-style space imagery using the "JWST" token.
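A minimal sketch of invoking the style, assuming a diffusers-compatible checkpoint (the repo id is a placeholder); the fine-tune is activated by including the "JWST" token in the prompt.

```python
# Minimal sketch: trigger-token prompting with diffusers.
# The repo id is hypothetical; only the "JWST" token comes from the entry above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "some-org/jwst-diffusion", torch_dtype=torch.float16  # hypothetical id
).to("cuda")

image = pipe("JWST deep field, spiral galaxy, diffraction spikes").images[0]
image.save("jwst_style.png")
```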
Brief-details: A sophisticated text-to-image model optimized for Asian facial features, combining OpenBra with multiple high-quality base models for enhanced photorealism and stability.
Brief-details: An MIT-licensed Textual Inversion concept that brings Midjourney's distinctive artistic style to Stable Diffusion, enabling high-quality artistic image generation.
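A minimal usage sketch with diffusers: the concept repo id and the placeholder token are assumptions, so check the concept's card for the exact token string.

```python
# Minimal sketch: loading a Textual Inversion embedding into a SD pipeline.
# The concept repo id and the "<midjourney-style>" token are assumed; the
# concept's card lists the actual token identifier.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/midjourney-style")  # assumed id

image = pipe("a castle in the style of <midjourney-style>").images[0]
image.save("castle.png")
```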
Brief-details: Mixtral-8x7B extended to a 32k sequence length. A large language model from Mistral AI, distributed as split checkpoint files for easier download and handling.
Brief-details: Anime-style text-to-image diffusion model optimized for kawaii aesthetics. Features high-quality anime character generation with a 7.5 CFG scale and 28-step sampling.
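The recommended settings map directly onto diffusers arguments: the CFG scale is guidance_scale and the 28-step sampling is num_inference_steps. The repo id below is a placeholder.

```python
# Minimal sketch showing where the entry's recommended settings plug in.
# The repo id is hypothetical; 7.5 and 28 come from the entry above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "some-org/kawaii-anime-model", torch_dtype=torch.float16  # hypothetical
).to("cuda")

image = pipe(
    "1girl, pastel colors, kawaii, detailed eyes",
    guidance_scale=7.5,       # recommended CFG scale
    num_inference_steps=28,   # recommended sampling steps
).images[0]
image.save("kawaii.png")
```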
Brief-details: A 3B parameter code generation model trained on 17 programming languages, featuring Grouped Query Attention and a 16K context window, optimized for code completion and generation tasks.
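A minimal code-completion sketch with transformers; the repo id is a placeholder for the 3B code model described above.

```python
# Minimal sketch: causal-LM code completion with transformers.
# The model id is hypothetical; substitute the actual checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/code-3b"  # hypothetical
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "def fibonacci(n: int) -> int:\n"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```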
Brief-details: A 2.8B parameter chat model fine-tuned on OASST1 and Dolly2, offering efficient language processing with multiple inference options and an Apache 2.0 license.
Brief-details: A versatile LoRA model collection featuring various effects including cotton mouth, skinny adjustments, fire breathing, and unique architectural backgrounds like Japanese public housing.
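A minimal sketch of applying one LoRA from such a collection with diffusers; the repo id, weight filename, and prompt are placeholders, since each effect in a collection typically ships as its own .safetensors file with its own trigger phrase.

```python
# Minimal sketch: loading a single effect LoRA onto a base SD pipeline.
# Repo id and weight_name are hypothetical; check the collection's card
# for the real filenames and trigger words.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "some-org/effect-loras", weight_name="fire_breathing.safetensors"  # hypothetical
)

image = pipe(
    "a dragon, fire breathing",
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength
).images[0]
image.save("effect.png")
```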
Brief-details: ArmoRM-Llama3-8B is an 8B parameter reward model that uses a mixture-of-experts approach for multi-objective reward modeling, achieving 89.0 on RewardBench.
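A sketch of the usual reward-scoring pattern for this model family; the repo id and the .score attribute of the custom remote-code output are assumptions to verify against the actual model card.

```python
# Minimal sketch: scoring a conversation with an ArmoRM-style reward model.
# The repo id and the custom output's .score attribute are assumptions
# based on this model family's typical trust_remote_code interface.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

path = "RLHFlow/ArmoRM-Llama3-8B-v0.1"  # assumed repo id
model = AutoModelForSequenceClassification.from_pretrained(
    path, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto"
)
tok = AutoTokenizer.from_pretrained(path)

messages = [
    {"role": "user", "content": "How do I boil an egg?"},
    {"role": "assistant", "content": "Simmer it in water for 7-9 minutes."},
]
input_ids = tok.apply_chat_template(messages, return_tensors="pt").to(model.device)
with torch.no_grad():
    score = model(input_ids).score  # assumed scalar preference score
print(float(score))
```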
BRIEF DETAILS: A powerful 7.24B parameter Mistral-based model optimized for fine-tuning tasks, featuring BF16 precision and merged architectures from OpenHermes and MetaMath.