Brief-details: Optimized 4-bit quantized version of Mistral-7B-Instruct, offering efficient inference with multiple GPTQ configurations and ExLlama compatibility.
Brief-details: GIST-small-Embedding-v0 is a 33.4M-parameter text embedding model fine-tuned on the MEDI dataset and MTEB Classification data, optimized for semantic similarity tasks without requiring instruction prefixes.
Brief-details: Universal AnglE Embedding model with 335M parameters achieving SOTA performance on the MTEB benchmark, optimized for sentence embeddings and similarity tasks.
Brief-details: Spanish sentiment analysis model trained on Twitter data using the RoBERTa architecture. 109M parameters, specialized for social-media text analysis.
Brief-details: LLaVA-v1.5-7B is a powerful multimodal chatbot combining vision and language capabilities, built on the LLaMA architecture, with 1.1M+ downloads.
Brief-details: FinBERT-tone is a specialized financial sentiment analysis model, fine-tuned on 10,000 manually annotated sentences from analyst reports for tone detection.
Brief-details: French part-of-speech tagging model based on CamemBERT, trained on the free-french-treebank dataset. 111M params, supports 29 POS tags.
Brief-details: SegFormer B1 model fine-tuned on ADE20k dataset for semantic segmentation tasks, featuring hierarchical Transformer encoder and MLP decoder head at 512x512 resolution.
Brief-details: SDXL-Turbo is a high-speed text-to-image model capable of generating photorealistic images in a single step, based on SDXL 1.0 and trained with Adversarial Diffusion Distillation.
Brief-details: Vision Transformer model for NSFW image detection with 86.1M params. Achieves 96.54% accuracy and a 99.48% AUC score. Fine-tuned on 25k diverse images.
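The AUC figure cited for classifiers like this one is the probability that a random positive example is scored above a random negative one. A minimal sketch of that rank-based computation, using made-up scores rather than the model's actual outputs:

```python
def auc(scores_pos, scores_neg):
    """ROC AUC as the probability a positive outranks a negative (ties count 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Dummy classifier scores: positives all outrank negatives -> AUC = 1.0
print(auc([0.9, 0.8, 0.7], [0.6, 0.4]))
```

In practice one would use a library implementation (e.g. scikit-learn's `roc_auc_score`); the quadratic pairwise loop above is only for illustration.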
Brief-details: Whisper base.en is a 74M-parameter English ASR model trained on 680k hours of data, offering robust speech recognition with 4.27% WER on LibriSpeech test-clean.
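The WER number above is word error rate: word-level edit distance divided by the reference length. A minimal sketch of the metric with hypothetical transcripts (not the model's actual output):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six reference words -> WER = 1/6
print(wer("the cat sat on the mat", "the cat sat on mat"))
```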
Brief-details: A lightweight Qwen2-based model with 1.22M parameters, optimized for text generation and conversational tasks using F32 tensor type.
Brief-details: A powerful multilingual text embedding model with 560M parameters supporting 94 languages, optimized for retrieval and semantic similarity tasks. Outperforms previous models on MTEB benchmarks.
Brief-details: GPT-2 Large (774M parameters) is an advanced language model by OpenAI for text generation and NLP tasks, known for robust language understanding and generation capabilities.
Brief-details: State-of-the-art multilingual text embedding model supporting 70+ languages with 305M parameters, offering dense and sparse vector representations.
Brief-details: RoBERTa-based QA model fine-tuned on SQuAD2.0, achieving 79.9% exact match accuracy. Ideal for extractive question answering tasks.
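The exact-match score quoted for this QA model compares predicted and gold answer spans after a normalization pass. A simplified sketch of the usual SQuAD-style normalization (illustrative only, not the model's actual evaluation code):

```python
import re
import string

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD-style)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> bool:
    """True if the normalized prediction equals the normalized gold answer."""
    return normalize_answer(prediction) == normalize_answer(gold)

print(exact_match("The Eiffel Tower.", "eiffel tower"))  # True: differs only in case, article, punctuation
```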
Brief-details: A powerful Chinese language embedding model optimized for text similarity and retrieval tasks, achieving SOTA performance with 1024d embeddings and improved similarity distribution.
Brief-details: BERT-based language model trained on 1.14M scientific papers (3.1B tokens) with a custom vocabulary, optimized for scientific text processing.
Brief-details: A PyTorch-based neural vocoder for high-quality audio synthesis, converting mel-spectrograms to waveforms using GAN architecture and Fourier transform techniques.
Brief-details: A specialized semantic search model with 66.4M parameters, trained on the MS MARCO dataset. Maps text to 768D vectors for efficient similarity matching.
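Similarity matching over embedding vectors like these (and those of the other embedding models above) is typically done with cosine similarity. A minimal sketch in plain Python, with tiny dummy 4-D vectors standing in for the model's 768-D embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Dummy vectors standing in for sentence embeddings.
query = [0.1, 0.3, 0.5, 0.1]
doc_a = [0.1, 0.3, 0.5, 0.1]   # same direction as the query -> similarity 1.0
doc_b = [0.5, 0.1, 0.1, 0.3]   # different direction -> lower similarity
print(cosine_similarity(query, doc_a))
print(cosine_similarity(query, doc_b))
```

Retrieval then amounts to ranking documents by this score against the query vector; at scale this is usually delegated to an approximate-nearest-neighbor index rather than a brute-force loop.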
Brief-details: A fine-tuned version of Stable Diffusion 2-base with 220k additional training steps, designed for high-quality text-to-image generation at 512x512 resolution.