Brief-details: A multilingual LLM built on Llama-2, covering 534 languages with expanded vocabulary (260,164 tokens) and LoRA adaptation for enhanced language capabilities.
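A minimal loading sketch, assuming the model ships as a LoRA adapter with its own extended tokenizer on top of a Llama-2 base; the repo ids below are placeholders, not the actual checkpoints:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-2-7b-hf"           # placeholder base model
ADAPTER = "example-org/multilingual-llama2-lora"  # hypothetical adapter repo

# The adapter repo is assumed to carry the expanded 260k-token vocabulary
tokenizer = AutoTokenizer.from_pretrained(ADAPTER)
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16, device_map="auto")
base.resize_token_embeddings(len(tokenizer))  # match the extended vocabulary before attaching LoRA
model = PeftModel.from_pretrained(base, ADAPTER)

inputs = tokenizer("Translate to Swahili: Good morning.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```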
Brief-details: A 4-bit quantized version of Baichuan2-7B-Chat model, trained on 2.6T tokens, supporting both Chinese and English languages with state-of-the-art performance in its size class.
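A usage sketch following the Baichuan2 remote-code interface; the repo id and the chat() helper mirror the upstream Baichuan2 model cards and should be verified against this quantized release (4-bit loading typically also requires bitsandbytes):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

MODEL = "baichuan-inc/Baichuan2-7B-Chat-4bits"  # adjust to the actual repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto", trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained(MODEL)

# The remote code exposes a chat() helper that applies the Baichuan prompt template
messages = [{"role": "user", "content": "Introduce large language models in one sentence."}]
print(model.chat(tokenizer, messages))
```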
Brief Details: A powerful 16B parameter code LLM with encoder-decoder architecture, specialized in code understanding and generation across 9 programming languages.
Brief-details: txtai-wikipedia is a specialized embeddings index for English Wikipedia articles, optimized for semantic search and RAG applications using the e5-base model.
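A short txtai sketch for querying the prebuilt index; the Hub repo id is assumed to be neuml/txtai-wikipedia:

```python
from txtai import Embeddings

# Load the prebuilt Wikipedia embeddings index from the Hugging Face Hub
embeddings = Embeddings()
embeddings.load(provider="huggingface-hub", container="neuml/txtai-wikipedia")

# Semantic search over article abstracts; each result includes id, text and score
for result in embeddings.search("Roman Empire", 3):
    print(result)
```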
Brief-details: A shoujo manga-style image generation model optimized for anime/manga artwork, with distinct style variations for the 90s, 00s, and 10s eras.
Brief-details: An anime-style text-to-image diffusion model fine-tuned on Hitokomoru artist's artwork, featuring high-quality character generation with Danbooru tag support.
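A diffusers sketch showing Danbooru-tag prompting; the repo id is a placeholder for the actual checkpoint:

```python
import torch
from diffusers import StableDiffusionPipeline

MODEL = "example-org/hitokomoru-style-diffusion"  # placeholder repo id

pipe = StableDiffusionPipeline.from_pretrained(MODEL, torch_dtype=torch.float16).to("cuda")

# Danbooru-style comma-separated tags rather than natural-language prompts
prompt = "1girl, solo, silver hair, looking at viewer, upper body, masterpiece, best quality"
negative = "lowres, bad anatomy, bad hands, worst quality"

image = pipe(prompt, negative_prompt=negative, num_inference_steps=28, guidance_scale=7.0).images[0]
image.save("sample.png")
```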
Brief-details: A specialized Stable Diffusion model fine-tuned for generating high-quality furry art, featuring breed-specific capabilities and optimized for anthropomorphic character generation.
Brief-details: A fine-tuned Whisper small model for Cantonese speech recognition, achieving 7.93% CER without punctuation. Features fast inference and extensive training on diverse Cantonese datasets.
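Transcription works through the standard transformers ASR pipeline; the model id below is a placeholder for the fine-tuned Cantonese checkpoint:

```python
from transformers import pipeline

# Placeholder repo id; requires ffmpeg for decoding audio files
asr = pipeline("automatic-speech-recognition",
               model="example-org/whisper-small-cantonese",
               chunk_length_s=30)

result = asr("cantonese_sample.wav")
print(result["text"])
```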
Brief Details: AltDiffusion is a bilingual text-to-image diffusion model supporting both Chinese and English, built on Stable Diffusion with 1.8B parameters and BAAI's AltCLIP technology.
Brief-details: A powerful 13.9B parameter multilingual text-to-text model capable of following instructions in 101 languages, trained on the xP3 dataset for diverse tasks.
Brief Details: A specialized text-to-image model fine-tuned from Stable Diffusion 1.5 for generating detailed, high-precision isometric city illustrations.
Brief Details: A 2.7B parameter Japanese language model based on GPT-NeoX architecture, trained on CC-100, Wikipedia, and OSCAR datasets. Optimized for Japanese text generation.
Brief Details: ExVideo-SVD-128f-v1 is an enhanced Stable Video Diffusion model capable of generating extended 128-frame videos, trained on 40K videos using 8x A100 GPUs.
Brief-details: An 8B parameter Llama-3-based model fine-tuned on private data, synthetic instructions, and novel data, achieving 71.75% on IFEval.
Brief Details: A 2B parameter image-to-video generation model fine-tuned on 10M videos, achieving quality comparable to 5B models. Supports CLI, Gradio, and ComfyUI interfaces.
Brief-details: A 4.15B parameter multimodal LLM combining InternViT-300M vision model with Phi-3-mini LLM, capable of processing images, videos and text with dynamic resolution support.
Brief Details: Internist.ai 7B - a physician-curated medical LLM built on Mistral-7B, scoring 60.5% on USMLE-style questions and reported as the first 7B model to reach a passing score on that benchmark.
Brief-details: A sophisticated multi-dialect speech recognition model trained on 300K hours of unlabeled audio, supporting 30 Chinese dialects including Cantonese, Shanghainese, and Sichuanese.
Brief Details: A powerful 24.2B parameter Mixture of Experts (MoE) model combining 4 specialized 7B experts for chat, code, roleplay & math tasks. 8k context.
Brief Details: A 97.6M parameter Vietnamese language model pre-trained on a 14GB corpus, achieving SOTA performance on social media tasks.
Brief-details: A powerful 7B parameter Mistral-based model fine-tuned with DPO, showing strong performance across benchmarks. Features ChatML format and improved reasoning capabilities.
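A sketch of building the expected ChatML prompt via the tokenizer's chat template, assuming the checkpoint ships one; the repo id is a placeholder:

```python
from transformers import AutoTokenizer

# Placeholder repo id; any ChatML-formatted tokenizer behaves the same way
tokenizer = AutoTokenizer.from_pretrained("example-org/mistral-7b-dpo-chatml")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain DPO in one sentence."},
]

# Produces the <|im_start|>role ... <|im_end|> ChatML layout the model was tuned on
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```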