Brief-details: Qwen1.5-32B is a powerful 32.5B parameter transformer-based language model with 32K context length support, built for text generation and research applications.
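A minimal usage sketch, assuming the Hugging Face repo id Qwen/Qwen1.5-32B and transformers >= 4.37 (which added the Qwen2 architecture this model uses); at 32.5B parameters the weights need multiple GPUs or heavy offloading:

```python
# Sketch: text generation with Qwen1.5-32B via transformers (assumed repo id).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" shards the 32.5B weights across available GPUs/CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The key idea behind attention is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```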
Brief-details: A powerful 7B parameter chat model that fuses knowledge from multiple LLMs, achieving 8.22 on MT-Bench and outperforming many larger models.
Brief-details: V-Express is an audio-driven portrait video generation model built on Stable Diffusion components, synthesizing realistic talking-head video from a reference image and an audio track.
Brief-details: TeleChat-7B-int8 is an 8-bit quantized Chinese LLM trained on 1.5T tokens, featuring multi-turn dialogue capabilities and long-text generation with strong performance across various benchmarks.
Brief-details: AgentLM-70B is a 69B parameter LLM fine-tuned on agent interaction trajectories, built on Llama-2-chat for enhanced agent capabilities and general language tasks.
Brief-details: CodeShell-7B is a powerful 7B-parameter multilingual code model achieving SOTA performance on HumanEval/MBPP, with an 8K context window and IDE plugin support.
Brief-details: A 7B parameter Mistral-based model specialized in converting OCR text to JSON, ideal for processing receipts and invoices with high accuracy.
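Since the entry does not name the checkpoint, the repo id and prompt template below are placeholders; a hedged sketch of how such an OCR-to-JSON model is typically driven:

```python
# Hypothetical sketch: OCR-text-to-JSON extraction with a Mistral-based model.
# The repo id is a placeholder, and the prompt format is an assumption that
# would need to match the actual model card.
from transformers import pipeline

generator = pipeline("text-generation", model="example-org/mistral-7b-ocr2json")  # placeholder id

ocr_text = "ACME STORE\nMilk 2.49\nBread 1.99\nTOTAL 4.48"
prompt = f"Convert the following OCR text to JSON:\n{ocr_text}\nJSON:"
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```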
Brief-details: Applio is an advanced voice conversion tool emphasizing ease of use and high-quality audio transformations, built on VITS/RVC architectures with MIT license and VCTK dataset integration.
Brief-details: PLaMo-13B is a 13B parameter LLaMA-based bilingual model trained on 1.5T tokens (1.32T English, 0.18T Japanese) with 4096 context length.
Brief-details: Nous-Hermes-13B-GGML is a CPU/GPU-optimized model with various quantization options (2-8 bit), built for efficient local inference using llama.cpp framework.
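A sketch of local inference under two assumptions: the quantization filename is illustrative, and an older llama-cpp-python release is used (pre-0.1.79 builds read GGML files; current releases require the newer GGUF container):

```python
# Sketch: local inference on a GGML quantization with llama-cpp-python.
# The filename is illustrative; pick the 2- to 8-bit variant that fits your RAM.
from llama_cpp import Llama

llm = Llama(model_path="./nous-hermes-13b.ggmlv3.q4_0.bin", n_ctx=2048)
out = llm("### Instruction: Explain quantization in one sentence.\n### Response:",
          max_tokens=64)
print(out["choices"][0]["text"])
```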
Brief-details: A comprehensive SDXL merge combining 80+ models and LoRAs, focusing on versatile image generation capabilities across multiple styles and domains.
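Any SDXL-format merge loads through the standard diffusers pipeline; a sketch with a placeholder repo id, since the entry does not name the merge:

```python
# Hypothetical sketch: running an SDXL merge with diffusers (placeholder repo id).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "example-org/sdxl-mega-merge",  # placeholder id
    torch_dtype=torch.float16,
).to("cuda")
image = pipe("a watercolor fox in a misty forest", num_inference_steps=30).images[0]
image.save("fox.png")
```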
Brief-details: T0 is an 11B parameter encoder-decoder model trained for zero-shot task generalization, outperforming GPT-3 on many tasks while being 16x smaller.
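T0 follows the standard encoder-decoder pattern, so zero-shot tasks are posed as natural-language prompts; a sketch matching the usage shown on the bigscience/T0 model card:

```python
# Sketch: zero-shot classification with T0 (encoder-decoder, 11B parameters).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0")

prompt = ("Is this review positive or negative? "
          "Review: this is the best cast iron skillet you will ever buy")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```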
Brief-details: A powerful 12.8B parameter Korean language model by EleutherAI, trained on 863GB of Korean text data with state-of-the-art performance in multiple NLP tasks.
Brief-details: FACodec - Advanced speech codec for NaturalSpeech 3 that decomposes audio into content, prosody, timbre, and acoustic details. Supports 16kHz speech with state-of-the-art compression.
Brief-details: Hierarchical audio-driven portrait animation model from Fudan University. Enables realistic facial animation from audio input, and the release includes explicit ethical-use guidelines.
Brief-details: Qwen2-1.5B is a powerful 1.54B parameter language model featuring SwiGLU activation and grouped-query attention (GQA), optimized for multiple languages and code generation.
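The GQA claim is easy to verify from the model config (assuming the repo id Qwen/Qwen2-1.5B): grouped-query attention shows up as fewer key/value heads than query heads:

```python
# Sketch: inspecting Qwen2-1.5B's attention layout (assumed repo id).
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Qwen/Qwen2-1.5B")
# With GQA, num_key_value_heads < num_attention_heads.
print("query heads:", cfg.num_attention_heads)
print("key/value heads:", cfg.num_key_value_heads)
```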
Brief-details: Turkish-optimized 7B parameter LLM based on Mistral, fine-tuned on 5B tokens with DoRA/LoRA methods for enhanced Turkish language capabilities.
Brief-details: A high-performing 7B parameter Mistral-based model achieving impressive MT-Bench scores (8.51) and strong performance across multiple benchmarks.
Brief-details: Rocket-3B: Efficient 3B parameter LLM using DPO training, achieving MT-Bench score of 6.56. Notable for matching larger models' performance despite compact size.
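As a hedged illustration of the DPO recipe in general (not Rocket-3B's released training script; the base model, dataset, and TRL >= 0.12 API are all assumptions):

```python
# Sketch: direct preference optimization with TRL's DPOTrainer.
# Mirrors TRL's documented quickstart, not Rocket-3B's actual recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "stabilityai/stablelm-3b-4e1t"  # assumed 3B base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
# Preference pairs: (prompt, chosen, rejected) rows.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-3b", beta=0.1),  # beta scales the DPO loss
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```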
Brief-details: SOLAR-10.7B-Instruct - Powerful 10.7B parameter LLM built with Upstage's depth up-scaling (DUS), optimized for single-turn conversations, with GGUF quantization options.
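llama-cpp-python can pull a GGUF quantization straight from the Hub (requires huggingface_hub; the repo id and filename pattern below are assumptions in TheBloke's naming style):

```python
# Sketch: fetching and running a GGUF quantization of SOLAR-10.7B-Instruct.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="TheBloke/SOLAR-10.7B-Instruct-v1.0-GGUF",  # assumed repo id
    filename="*Q4_K_M.gguf",  # glob selecting one quantization file
    n_ctx=4096,
)
out = llm("### User:\nSummarize depth up-scaling.\n\n### Assistant:\n", max_tokens=64)
print(out["choices"][0]["text"])
```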
Brief-details: MythoMax-L2-13B-GGML is a GGML-quantized variant of MythoMax L2 13B, optimized for CPU+GPU inference with various quantization options from 2-bit to 8-bit precision.