Brief-details: RWKV-6 World is a multilingual text-generation model trained on 1.42T tokens across 12 languages, featuring enhanced MMLU performance and an Apache 2.0 license.
Brief-details: A comprehensive suite of sparse autoencoders for analyzing Gemma 2 models (2B, 9B, 27B), acting as a "microscope" to examine internal model activations and concepts.
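A minimal sketch of applying one of these sparse autoencoders to a residual-stream activation, assuming the published layout (the repo id, file path, and JumpReLU parameter names below are assumptions based on the release, not verified here):

```python
# Hypothetical sketch: load one Gemma Scope JumpReLU SAE and encode an activation.
import numpy as np
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-res",                   # assumed repo id
    filename="layer_20/width_16k/average_l0_71/params.npz",   # assumed file layout
)
np_params = np.load(path)
params = {k: torch.from_numpy(np_params[k]) for k in np_params.files}

def jumprelu_encode(resid, p):
    """JumpReLU encoder: keep pre-activations only where they exceed the learned threshold."""
    pre = resid @ p["W_enc"] + p["b_enc"]
    return torch.relu(pre) * (pre > p["threshold"])

# Random stand-in for a Gemma 2 2B residual-stream activation (d_model = 2304).
acts = jumprelu_encode(torch.randn(1, 2304), params)
print("active features:", int((acts > 0).sum()))
```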
Brief-details: A 7B parameter multilingual language model fine-tuned on the xP3 dataset, capable of following instructions in 46 languages with strong zero-shot performance.
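A minimal usage sketch, assuming the checkpoint is `bigscience/bloomz-7b1` (the xP3 fine-tune):

```python
# Zero-shot instruction following via the transformers text-generation pipeline.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="bigscience/bloomz-7b1",
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Translate to English: Je t'aime. Translation:"
print(generator(prompt, max_new_tokens=20)[0]["generated_text"])
```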
Brief-details: A powerful 70B parameter LLM fine-tuned on the SlimOrca dataset, achieving 85.09 on EQ-Bench. Features ChatML format and impressive performance across multiple benchmarks.
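The ChatML prompt layout it expects can be assembled by hand; a minimal sketch (the system and user text are illustrative):

```python
# Build a ChatML-formatted prompt string for a ChatML-tuned model.
def chatml(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml("You are a helpful assistant.",
                "Summarize the SlimOrca dataset in one sentence.")
print(prompt)
```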
Brief-details: MeinaMix - A specialized text-to-image diffusion model focused on anime art generation, optimized for quality output with minimal prompting.
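An illustrative diffusers call; the repo id `Meina/MeinaMix_V11` and its availability in diffusers format are assumptions:

```python
# Text-to-image generation with a short anime-style prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11",          # assumed repo id
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("1girl, silver hair, night city, detailed eyes",
             num_inference_steps=25).images[0]
image.save("meinamix_sample.png")
```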
Brief-details: A powerful 33B parameter code generation model built on DeepSeek-Coder, achieving 92.7% accuracy on HumanEval with execution feedback and synthetic human feedback.
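A hedged sketch of prompting it through transformers, assuming the checkpoint is `m-a-p/OpenCodeInterpreter-DS-33B` and that it ships a chat template; a 33B model needs multiple GPUs or aggressive quantization in practice:

```python
# Code generation from a natural-language instruction.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "m-a-p/OpenCodeInterpreter-DS-33B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that checks if a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```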
Brief-details: Luna-AI-Llama2-Uncensored: Fine-tuned Llama2 chat model trained on 40k conversations with uncensored responses. Supports both GPU (GPTQ) and CPU (GGML) inference.
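A CPU-side sketch with `llama-cpp-python`; the local quantized filename and the USER/ASSISTANT prompt style are assumptions (the original release shipped GGML files, newer conversions use GGUF):

```python
# CPU inference on a locally downloaded quantized file.
from llama_cpp import Llama

llm = Llama(model_path="./luna-ai-llama2-uncensored.Q4_K_M.gguf",  # assumed local filename
            n_ctx=2048)

prompt = "USER: What is the capital of France?\nASSISTANT:"
out = llm(prompt, max_tokens=64, stop=["USER:"])
print(out["choices"][0]["text"])
```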
Brief-details: TensorRT-optimized version of Stable Diffusion XL 1.0, offering up to a 70% performance improvement on H100 GPUs while maintaining output quality.
Brief-details: A 30B parameter uncensored language model based on WizardLM, trained without alignment constraints, achieving 82.93% on HellaSwag and 56.8% on MMLU benchmarks.
Brief-details: SynthIA-7B-v1.3 is an uncensored Mistral-7B-based language model fine-tuned on Orca datasets, achieving a 57.11% average score across benchmark tasks.
Brief-details: An advanced IP-Adapter for the FLUX.1-dev model, trained on high-resolution images for enhanced image-to-image generation, with ComfyUI integration and an Apache 2.0 license.
Brief-details: DeciDiffusion-v1-0 is an 820M parameter text-to-image model offering 3x faster generation than Stable Diffusion with comparable quality, featuring an innovative U-Net-NAS architecture.
Brief-details: A versatile text-to-image model combining multiple artistic styles, optimized for semi-realistic artwork with strong emphasis on detailed faces and dynamic compositions.
Brief-details: BLOOMZ-7B1-MT: A 7.1B parameter multilingual instruction-following model capable of zero-shot task completion across 46 languages with strong cross-lingual generalization abilities.
Brief-details: A powerful 12B parameter language model by NVIDIA & Mistral AI with 128k context window, multilingual capabilities, and FP8 quantization support.
Brief-details: A fine-tuned Stable Diffusion 2.0 model specialized in generating high-quality 3D-render-style images at 768x768 resolution using the 'redshift style' token.
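An illustrative call, assuming the checkpoint is published in diffusers format as `nitrosocke/redshift-diffusion-768`:

```python
# 768x768 generation triggered by the "redshift style" token.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "nitrosocke/redshift-diffusion-768",  # assumed repo id
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("redshift style robot explorer on a desert planet",
             height=768, width=768).images[0]
image.save("redshift_robot.png")
```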
Brief-details: Mixtral-8x22B-v0.1: A powerful 141B-parameter sparse Mixture of Experts model supporting 5 languages. Features BF16 precision and an Apache 2.0 license.
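A loading sketch with 4-bit quantization via bitsandbytes; even quantized, the roughly 141B parameters need on the order of 70+ GB of GPU memory, so multi-GPU sharding with `device_map="auto"` is assumed:

```python
# Base (non-instruct) checkpoint: plain text continuation rather than chat.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")

inputs = tokenizer("Mixture-of-experts models route each token to", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0], skip_special_tokens=True))
```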
Brief-details: A fine-tuned MusicGen model specialized in generating song ideas and melody loops, trained on curated Splice samples with stereo audio output at 32 kHz.
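A hedged sketch using the audiocraft library; the checkpoint id `nateraw/musicgen-songstarter-v0.2` is an assumption:

```python
# Generate short melody-loop ideas from text prompts.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("nateraw/musicgen-songstarter-v0.2")  # assumed checkpoint id
model.set_generation_params(duration=8)  # seconds per loop

wavs = model.generate(["acoustic, guitar, melody, trap, 90 bpm"])
for i, wav in enumerate(wavs):
    # Writes a loudness-normalized audio file at the model's native sample rate.
    audio_write(f"loop_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```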
Brief-details: Custom embedding model for Stable Diffusion v2.0 that creates photorealistic objects in transparent display cases, trained on high-quality captioned images for knolling-style presentations.
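A sketch of loading the embedding as a standard textual-inversion file in diffusers; the base checkpoint, local filename, and trigger token are assumptions:

```python
# Attach a textual-inversion embedding to a Stable Diffusion 2.0 pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",  # assumed base checkpoint
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_textual_inversion("./knollingcase.pt", token="knollingcase")  # assumed file and token

image = pipe("a vintage camera, knollingcase, transparent display case, studio lighting").images[0]
image.save("knolling_camera.png")
```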
Brief-details: NVIDIA's 340B parameter multilingual LLM supporting 50+ natural languages and 40+ coding languages, with a 4,096-token context length.
Brief-details: An advanced Stable Diffusion model specialized in anime/manga character generation, featuring improved anatomical stability and composition variation compared to v1.