Brief-details: A comprehensive collection of AI voice models for music creation, featuring 200+ trained voice models from various artists and celebrities, built using SoftVC VITS and RVC technologies.
Brief Details: A 13B parameter multilingual LLM supporting 101 languages. Built on the T5 architecture with instruction-tuning capabilities across diverse languages and scripts.
Brief Details: AuraFlow is a state-of-the-art flow-based text-to-image model released under the Apache 2.0 license, offering high-resolution image generation capabilities.
BRIEF DETAILS: A specialized Stable Diffusion model fine-tuned on Studio Ghibli anime films, offering text-to-image and image-to-image generation with distinct Ghibli-style aesthetics.
Brief Details: 7th_Layer is an AI image generation model optimized for anime-style artwork, with recommended settings of the DPM++ 2M Karras sampler and a CFG scale of 7 ±5.
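For context, a minimal diffusers sketch of those recommended settings; the repo id and prompt are illustrative assumptions, and the checkpoint must be available in diffusers format for this to load:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Repo id is an assumption; substitute the actual 7th_Layer checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "syaimu/7th_Layer", torch_dtype=torch.float16
).to("cuda")

# DPM++ 2M Karras: multistep DPM-Solver scheduler with Karras sigmas.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "anime portrait, detailed eyes, soft lighting",  # illustrative prompt
    guidance_scale=7.0,          # CFG scale 7 (card suggests roughly 7 ±5)
    num_inference_steps=25,
).images[0]
image.save("sample.png")
```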
Brief-details: Baichuan-13B-Chat is a powerful 13B parameter bilingual LLM optimized for Chinese/English, featuring ALiBi positional encoding, a 4096-token context length, and efficient INT4/INT8 quantization support.
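One possible route to the INT4/INT8 support is loading through transformers with bitsandbytes; a hedged sketch (prompt and generation settings are illustrative, and the model card may document its own preferred quantization path):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "baichuan-inc/Baichuan-13B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# INT4 via bitsandbytes; swap load_in_4bit for load_in_8bit to get INT8.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    trust_remote_code=True,
    device_map="auto",
)

prompt = "用一句话介绍一下长城。"  # "Introduce the Great Wall in one sentence." (illustrative)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```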
Brief Details: OLMo-7B is a 6.89B parameter open language model trained on 2.5T tokens, featuring 32 layers and a 4096 hidden size, released to advance open language model research.
BRIEF DETAILS: 340B parameter multilingual LLM optimized for chat and instruction-following, featuring advanced alignment techniques and synthetic data generation capabilities.
Brief Details: DeepSeek-V2.5: A 236B parameter unified model combining general and coding capabilities, featuring BF16 precision and improved performance across multiple benchmarks.
Brief Details: A 2.7B parameter code generation model trained on 18 programming languages with state-of-the-art performance and Fill-in-the-Middle (FIM) capability, achieving 32.4% pass@1 on Python.
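A rough illustration of Fill-in-the-Middle prompting with transformers; the repo id and FIM sentinel tokens below are assumptions, so check the model card for the exact tokens this checkpoint was trained with:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id is assumed for illustration.
model_id = "stabilityai/stable-code-3b"
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prefix = "def average(xs):\n    total = "
suffix = "\n    return total / len(xs)\n"
# Fill-in-the-Middle: the model conditions on both prefix and suffix,
# then generates the missing middle span.
prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```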
Brief-details: Advanced 8B parameter Chinese-English LLM built on Llama3, optimized for bilingual dialogue with enhanced capabilities in roleplay, function calling & math. Trained on 100K preference pairs.
Brief Details: A 13B parameter GPT-style language model trained on The Pile dataset, optimized for research and featuring Chinchilla-optimal scaling with 20 tokens per parameter.
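The "20 tokens per parameter" Chinchilla rule of thumb implies roughly a 260B-token training budget for a 13B model; a one-line check:

```python
# Chinchilla-style rule of thumb: ~20 training tokens per parameter.
params = 13e9
tokens = 20 * params
print(f"{tokens / 1e9:.0f}B training tokens")  # -> 260B tokens for a 13B model
```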
Brief Details: Meta's Llama-2-13B optimized for chat, converted to GGML format for CPU/GPU inference with support for multiple quantization levels.
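A minimal llama-cpp-python sketch for running a quantized file on CPU with optional GPU offload; the file name is illustrative, and note that recent llama-cpp-python releases expect GGUF rather than GGML files:

```python
from llama_cpp import Llama

# Point model_path at whichever quantized variant you downloaded
# (e.g. q4_K_M for a reasonable size/quality trade-off).
llm = Llama(
    model_path="./llama-2-13b-chat.ggmlv3.q4_K_M.bin",
    n_ctx=2048,        # context window
    n_gpu_layers=35,   # offload layers to GPU when available; 0 = CPU only
)

out = llm(
    "[INST] Summarize what GGML quantization does in one sentence. [/INST]",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```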
Brief-details: Stability AI's advanced image-to-3D model that improves upon Zero123, enabling high-quality 3D object generation from single images with enhanced rendering capabilities.
Brief Details: TimesFM is Google's foundation model for time-series forecasting, supporting context lengths up to 512 points with flexible horizon lengths and frequency indicators.
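A hedged usage sketch following the timesfm 1.0 README; the constructor arguments and checkpoint id are assumptions and may differ in newer releases of the package:

```python
import numpy as np
import timesfm

# Arguments below follow the timesfm 1.0 README; treat them as assumptions.
tfm = timesfm.TimesFm(
    context_len=512,      # up to 512 past points per series
    horizon_len=128,      # desired forecast horizon
    input_patch_len=32,
    output_patch_len=128,
    num_layers=20,
    model_dims=1280,
    backend="cpu",
)
tfm.load_from_checkpoint(repo_id="google/timesfm-1.0-200m")

history = np.sin(np.linspace(0, 20, 400))               # any 1-D series up to 512 points
point_forecast, _ = tfm.forecast([history], freq=[0])   # freq 0 = high-frequency data
print(point_forecast.shape)                             # (1, 128)
```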
Brief Details: A 2.7B parameter code completion model trained on 20 programming languages, featuring Flash Attention and ALiBi positional embeddings for enhanced performance.
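To illustrate the ALiBi component, a small standalone sketch of how per-head linear attention biases are computed; this is illustrative only, not this model's exact implementation:

```python
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    """Illustrative ALiBi: each head gets a fixed slope, and attention logits are
    penalized linearly with query-key distance (no learned position embeddings)."""
    # Geometric slope schedule from the ALiBi paper; assumes num_heads is a power of 2.
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])
    # distance[i, j] = j - i  (<= 0 for keys at or before the query under a causal mask)
    pos = torch.arange(seq_len)
    distance = (pos[None, :] - pos[:, None]).clamp(max=0).float()
    return slopes[:, None, None] * distance[None, :, :]   # shape: (heads, query, key)

bias = alibi_bias(num_heads=8, seq_len=16)   # added to attention logits before softmax
```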
Brief-details: Kolors is an advanced text-to-image diffusion model supporting both Chinese and English, trained on billions of image-text pairs with exceptional photorealistic output quality.
BRIEF DETAILS: Pygmalion-6B: A dialogue-focused fine-tuned version of GPT-J-6B, trained on 56MB of conversation data. Specializes in character-based interactions; the model has 733 likes and 2.6K+ downloads.
Brief Details: NVLM-D-72B is a powerful 79.4B parameter multimodal LLM from NVIDIA that excels at vision-language tasks, achieving SOTA results across multiple benchmarks.
Brief-details: WizardCoder-15B is an open-source code LLM achieving 57.3% pass@1 on HumanEval, built on StarCoder with evolved coding instructions and released under an OpenRAIL-M license.
BRIEF-DETAILS: A 4-bit quantized version of GPT4-X-Alpaca-13B, optimized for CUDA with a group size of 128, offering efficient text generation.
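One hedged way to load such a 4-bit, group-size-128 checkpoint is via AutoGPTQ; the repo id below is an assumption, and older GPTQ-for-LLaMa exports may need a quantize config or a format-specific loader:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Repo id is an assumption; use whichever 4-bit, 128g export you actually have.
model_id = "anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g"
tok = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0", use_safetensors=True)

prompt = "### Instruction:\nWrite a haiku about GPUs.\n\n### Response:\n"
inputs = tok(prompt, return_tensors="pt").to("cuda:0")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```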