Brief Details: State-of-the-art multilingual embedding model ranking #1 on the MTEB benchmark. Built on Qwen2-7B with 3584-dim embeddings and a 32k context window.
Brief Details: Sentence embedding model trained on 1B+ sentence pairs using MiniLM-L6 architecture. Optimized for semantic similarity and retrieval tasks.
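Embedding models like the two above are queried the same way regardless of size: encode text to fixed-size vectors, then rank candidates by cosine similarity. A minimal NumPy sketch of that retrieval step — the 4-dim vectors here are toy stand-ins for real model outputs (e.g. 384-dim for a MiniLM-L6 model), not values produced by either model:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings" standing in for real model outputs.
query = np.array([0.1, 0.3, -0.2, 0.9])
doc_a = np.array([0.1, 0.25, -0.15, 0.85])   # semantically close to the query
doc_b = np.array([-0.9, 0.1, 0.8, -0.1])     # unrelated

scores = {name: cosine_similarity(query, vec)
          for name, vec in [("doc_a", doc_a), ("doc_b", doc_b)]}
best = max(scores, key=scores.get)  # retrieval = pick the highest-scoring doc
```

In practice the encode step is done in batches and the similarity computed as one matrix product over normalized vectors; the ranking logic is unchanged.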
Brief Details: Multilingual speech recognition model based on wav2vec2-large-lv60, fine-tuned on Common Voice for phonetic transcription. Supports 16kHz audio input.
Brief Details: A 3.61B parameter uncensored language model available in multiple GGUF quantization formats, optimized for efficient deployment and conversational tasks.
Brief Details: MPT-7B is a 7B-parameter decoder transformer trained on 1T tokens, featuring ALiBi positional encoding and a commercial-use license. Built for efficiency and long contexts.
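ALiBi, the positional scheme MPT-7B uses in place of learned position embeddings, adds a distance-proportional penalty to attention scores: each head gets a fixed slope, and keys further in the past are penalized linearly. A small NumPy sketch of the bias tensor (following the ALiBi paper's geometric slope schedule; not MPT's actual implementation):

```python
import numpy as np

def alibi_slopes(n_heads: int) -> np.ndarray:
    """Per-head slopes 2^(-8/n), 2^(-16/n), ... as in the ALiBi paper."""
    return np.array([2 ** (-8 * (h + 1) / n_heads) for h in range(n_heads)])

def alibi_bias(seq_len: int, n_heads: int) -> np.ndarray:
    """(heads, query, key) bias added to attention logits before softmax."""
    distance = np.arange(seq_len)[None, :] - np.arange(seq_len)[:, None]  # k - q
    distance = np.minimum(distance, 0)  # only penalize past keys; future is masked elsewhere
    return alibi_slopes(n_heads)[:, None, None] * distance

bias = alibi_bias(seq_len=5, n_heads=8)
```

Because the bias is computed from token distance rather than learned per position, it extrapolates to sequences longer than those seen in training, which is what makes ALiBi attractive for long-context models.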
Brief Details: ELECTRA large discriminator model fine-tuned on SQuAD2.0, achieving 87.1% exact match and 90% F1 for question answering tasks.
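The exact-match and F1 figures quoted for SQuAD-style QA are computed per answer after aggressive normalization (lowercasing, stripping punctuation and articles), with F1 measured as token overlap. A minimal sketch of both metrics in the spirit of the official SQuAD evaluation script (a simplified reimplementation, not the script itself):

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation/articles, squeeze spaces."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred: str, gold: str) -> bool:
    return normalize(pred) == normalize(gold)

def token_f1(pred: str, gold: str) -> float:
    pred_toks, gold_toks = normalize(pred).split(), normalize(gold).split()
    common = Counter(pred_toks) & Counter(gold_toks)   # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
```

Corpus-level EM and F1 are then just the mean of these per-example scores (taking the max over gold answers when several are provided).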
Brief Details: A Stable Diffusion XL text-to-image model with significant community adoption (40K+ downloads) focusing on specialized image generation capabilities.
Brief Details: Optimized English speech recognition model using CTranslate2, derived from Whisper tiny.en. Features fast transcription with MIT license and 40K+ downloads.
Brief Details: Specialized 1.5B parameter math-focused LLM supporting both Chain-of-Thought and Tool-integrated Reasoning for English/Chinese problems.
Brief Details: DreamShaper 8 inpainting - Advanced Stable Diffusion model for image inpainting, supporting both realistic and anime styles with improved LoRA compatibility.
Brief Details: Kandinsky 2.1 Prior model - Advanced text-to-image diffusion model combining CLIP and latent diffusion, featuring innovative image prior mapping between CLIP modalities.
Brief Details: MetaCLIP base model trained on 400M image-text pairs curated from CommonCrawl, enabling zero-shot image classification and image-text retrieval with 32×32-pixel patches.
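Zero-shot classification with a CLIP-style model like MetaCLIP works by embedding the image and a text prompt per candidate label, then taking a softmax over scaled cosine similarities. A toy NumPy sketch of that final step — the 3-dim vectors and the two labels are illustrative placeholders, and the logit scale of 100 mirrors CLIP's convention rather than MetaCLIP's exact learned value:

```python
import numpy as np

def zero_shot_probs(image_emb: np.ndarray, text_embs: np.ndarray,
                    logit_scale: float = 100.0) -> np.ndarray:
    """CLIP-style zero-shot classification: softmax over scaled cosine similarities."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = logit_scale * (txt @ img)       # cosine similarity per label
    exp = np.exp(logits - logits.max())      # numerically stable softmax
    return exp / exp.sum()

# Toy embeddings standing in for real model outputs.
image = np.array([0.9, 0.1, 0.0])
prompts = np.array([
    [0.85, 0.15, 0.05],   # e.g. "a photo of a cat"
    [0.0,  0.9,  0.4],    # e.g. "a photo of a dog"
])
probs = zero_shot_probs(image, prompts)
```

The label whose prompt embedding is most aligned with the image embedding gets the highest probability, with no task-specific fine-tuning involved.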
Brief Details: StructTable-base is a 324M parameter image-to-text model specialized in converting table images to LaTeX code, supporting both English and Chinese tables from scientific documents.
Brief Details: Chinese text embedding model optimized for multiple tasks, featuring improved synthetic data and unified circle loss approach. Strong performance on CMTEB benchmark.
Brief Details: KcELECTRA-base-v2022 is a Korean-focused ELECTRA model trained on user-generated content and noisy text, optimized for comment analysis with 475M parameters and MIT license.
Brief Details: A specialized Vietnamese language embedding model based on PhoBERT, featuring 135M parameters and achieving 84.87% accuracy on STSB benchmark.
Brief Details: BLIP-2 model with OPT-2.7b backbone, specialized in image-to-text tasks with 3.87B parameters. Features frozen image encoders and supports captioning and VQA.
Brief Details: A text-to-image diffusion model built on SDXL pipeline with 40K+ downloads. Popular for generating specific artistic content with SafeTensors support.
Brief Details: CodeBERT model fine-tuned on C++ code for masked language modeling, optimized for code evaluation and analysis tasks with 1M training steps.
Brief Details: 14B parameter instruction-tuned language model from Microsoft's Phi-3 family, optimized for 4K context, with strong reasoning and code capabilities.
Brief Details: A Catalan speech recognition model based on wav2vec2-large-xlsr-53, achieving 6.92% WER on test data. Optimized for 16kHz audio input.
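WER figures like the 6.92% above are computed as Levenshtein edit distance over words between reference and hypothesis transcripts, divided by the reference length. A minimal sketch of the standard metric (a generic reimplementation, not this model's own evaluation code; the Catalan sentences below are made-up examples):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count,
    via Levenshtein distance over word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[-1][-1] / len(ref)
```

Note that WER can exceed 100% when the hypothesis contains many insertions, which is why it is an error rate rather than an accuracy.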