Brief Details: An 8B parameter uncensored Llama-3 variant with strong performance across reasoning benchmarks (66.18% average). Scores 77.88% on HellaSwag, with strong math results.
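A minimal generation sketch with the transformers pipeline; the summary gives no repo id, so the one below is a placeholder.

```python
# Minimal sketch, assuming a standard causal-LM checkpoint.
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="your-org/llama-3-8b-uncensored",  # placeholder: actual repo id not given above
    device_map="auto",
    torch_dtype="auto",
)
print(generate("Briefly explain chain-of-thought prompting.",
               max_new_tokens=64)[0]["generated_text"])
```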
Brief Details: Advanced face ID-conditioned image generation model that combines face recognition embeddings with CLIP for creating highly personalized images while maintaining identity consistency.
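A sketch of the identity-embedding step such models build on, using insightface; the pack name and downstream wiring are assumptions, not this model's documented API.

```python
# Extract a face identity embedding with insightface; "buffalo_l" is the
# standard detector/recognizer pack, assumed here for illustration.
import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))

image = cv2.imread("reference_face.jpg")
faces = app.get(image)
id_embedding = faces[0].normed_embedding  # 512-d identity vector
```

The generator then conditions on this identity vector together with CLIP image features to keep the subject consistent across generations.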
Brief Details: BERTweet-based emotion analysis model with 135M parameters, trained on the EmoEvent corpus. Specializes in English emotion detection; licensed for non-commercial use.
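A usage sketch with the text-classification pipeline; the repo id is a placeholder since the summary does not name one.

```python
from transformers import pipeline

classify = pipeline(
    "text-classification",
    model="your-org/bertweet-emotion",  # placeholder repo id
    top_k=None,                         # return scores for every emotion label
)
print(classify("I can't believe we finally won the match!"))
```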
Brief Details: A fine-tuned Hindi speech recognition model based on Wav2Vec2-XLSR-53 (316M parameters), expecting 16kHz input audio. Reports a 72.62% word error rate (WER) on the Common Voice Hindi test set.
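A transcription sketch; the repo id is a placeholder. The ASR pipeline decodes file input and resamples it to the model's expected 16kHz automatically.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-org/wav2vec2-xlsr-53-hindi",  # placeholder repo id
)
print(asr("hindi_sample.wav")["text"])
```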
Brief Details: A powerful ELECTRA-based cross-encoder trained on MS MARCO passage ranking, achieving 71.99 NDCG@10 on TREC DL 19 and 36.41 MRR@10 on the MS MARCO dev set.
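A reranking sketch with sentence-transformers' CrossEncoder. The quoted scores match cross-encoder/ms-marco-electra-base, but treat that id as an inference rather than a confirmed fact.

```python
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-electra-base")  # inferred repo id
scores = reranker.predict([
    ("how do solar panels work", "Photovoltaic cells convert sunlight into electricity."),
    ("how do solar panels work", "The 2014 World Cup was held in Brazil."),
])
print(scores)  # higher score = more relevant passage
```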
Brief Details: A 335M parameter text embedding model fine-tuned on the MEDI dataset and MTEB Classification training data, optimized for semantic search without requiring instruction prefixes.
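An embedding sketch with sentence-transformers; note that no instruction prefix is needed, per the summary. The repo id is a placeholder.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("your-org/embedding-335m")  # placeholder repo id
embeddings = model.encode([
    "What is semantic search?",
    "Semantic search matches meaning, not keywords.",
])
print(embeddings.shape)  # (2, embedding_dim)
```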
Brief Details: A specialized semantic search model producing 384-dimensional vectors, trained on the MS MARCO dataset. 22.7M parameters, optimized for sentence similarity.
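A retrieval sketch; the size and dimensionality match MiniLM-class MS MARCO models, but the repo id below is a placeholder.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your-org/msmarco-minilm")  # placeholder repo id
query_emb = model.encode("capital of France", convert_to_tensor=True)
corpus_emb = model.encode([
    "Paris is the capital of France.",
    "Berlin is the capital of Germany.",
], convert_to_tensor=True)
print(util.semantic_search(query_emb, corpus_emb, top_k=1))
```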
Brief Details: Lightweight MobileNetV3 variant (2.06M params) for image classification, trained on ImageNet-1k. Optimized for minimal compute, with weights ported from the original TensorFlow implementation.
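A classification sketch via timm; the variant name is a guess matched to the ~2M parameter count, so adjust to the actual checkpoint.

```python
import timm
import torch

# mobilenetv3_small_075 (~2M params) is an assumed variant, not confirmed above
model = timm.create_model("mobilenetv3_small_075", pretrained=True).eval()
x = torch.randn(1, 3, 224, 224)  # dummy ImageNet-sized input
with torch.no_grad():
    logits = model(x)
print(logits.argmax(dim=1))  # predicted ImageNet-1k class index
```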
Brief Details: Vicuna-7b-v1.5: A powerful chatbot fine-tuned from Llama 2 on ~125K ShareGPT conversations, intended for research on large language models and chatbots.
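A generation sketch for lmsys/vicuna-7b-v1.5. Vicuna expects its own conversation format; the USER/ASSISTANT template below is recalled from the model card, so verify before relying on it.

```python
from transformers import pipeline

chat = pipeline("text-generation", model="lmsys/vicuna-7b-v1.5",
                device_map="auto", torch_dtype="auto")
prompt = "USER: What is the capital of France? ASSISTANT:"  # Vicuna v1.5 chat format
print(chat(prompt, max_new_tokens=32)[0]["generated_text"])
```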
Brief Details: Stable Diffusion v2 - Advanced text-to-image model with improved photorealism, trained on subsets of the LAION-5B dataset, supporting 768x768 resolution and v-objective training.
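A text-to-image sketch with diffusers; stabilityai/stable-diffusion-2-1 is the 768x768 v-objective checkpoint usually meant by "Stable Diffusion v2", assumed here.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photorealistic mountain lake at sunrise",
             height=768, width=768).images[0]
image.save("lake.png")
```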
Brief Details: Advanced inpainting model based on SDXL, capable of high-quality 1024x1024 image editing with mask-guided generation and text prompts.
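An inpainting sketch with diffusers; the checkpoint id is the commonly used SDXL inpainting release, assumed rather than confirmed by the summary.

```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")
image = load_image("scene.png").resize((1024, 1024))
mask = load_image("mask.png").resize((1024, 1024))  # white = region to repaint
result = pipe("a red vintage car", image=image, mask_image=mask).images[0]
result.save("inpainted.png")
```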
Brief Details: FLAN-T5-small is a 77M parameter instruction-tuned language model from Google, covering 50+ languages and optimized for text-to-text generation tasks.
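A quick instruction-following sketch for google/flan-t5-small:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
inputs = tok("Translate to German: How old are you?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```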
Brief Details: A powerful Russian-to-English translation model by Helsinki-NLP, achieving BLEU scores of 30-34 on news datasets and 61.1 on Tatoeba.
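A translation sketch for Helsinki-NLP/opus-mt-ru-en:

```python
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-en")
print(translate("Москва — столица России.")[0]["translation_text"])
# e.g. "Moscow is the capital of Russia."
```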
Brief Details: Powerful 72.7B parameter LLM with enhanced coding, math, and multilingual capabilities. Supports a 128K token context window and up to 8K tokens of generation.
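A chat sketch via the tokenizer's chat template; the figures above match Qwen2.5-72B-Instruct, but that is an inference, so the repo id below is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/72b-instruct"  # placeholder repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto",
                                             torch_dtype="auto")
messages = [{"role": "user", "content": "Reverse a list in Python, one line."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=64)[0],
                 skip_special_tokens=True))
```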
Brief Details: Universal image segmentation model using a Swin Transformer backbone for semantic, instance, and panoptic segmentation, trained on the COCO dataset.
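A panoptic segmentation sketch; Mask2Former with a Swin backbone trained on COCO fits the description, but the exact checkpoint is an assumption.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

ckpt = "facebook/mask2former-swin-base-coco-panoptic"  # assumed checkpoint
processor = AutoImageProcessor.from_pretrained(ckpt)
model = Mask2FormerForUniversalSegmentation.from_pretrained(ckpt)

image = Image.open("street.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
panoptic = processor.post_process_panoptic_segmentation(
    outputs, target_sizes=[image.size[::-1]])[0]
print(panoptic["segments_info"][:3])  # first few detected segments
```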
Brief Details: GPTQ-quantized version of Mistral-7B-Instruct-v0.2 optimized for efficient inference, offering multiple quantization options (4-bit/8-bit) with Act-Order and various group sizes. Apache 2.0 licensed.
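A loading sketch; TheBloke/Mistral-7B-Instruct-v0.2-GPTQ is the likely repo (requires a GPTQ backend such as auto-gptq/optimum). On TheBloke repos the group-size/Act-Order variants live on revision branches.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"  # inferred repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto",
    # revision="gptq-4bit-32g-actorder_True",  # pick a quantization branch if needed
)

prompt = "[INST] What is GPTQ quantization? [/INST]"  # Mistral instruct format
inputs = tok(prompt, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0],
                 skip_special_tokens=True))
```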
Brief Details: A sentence similarity model based on DistilBERT, optimized for semantic search with 768-dimensional embeddings and 66.4M parameters.
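A pairwise-similarity sketch; the specs fit sentence-transformers' MS MARCO DistilBERT family, so the id below is an educated guess.

```python
from sentence_transformers import SentenceTransformer, util

# msmarco-distilbert-base-v4 is assumed from the 768-dim / 66M-param specs
model = SentenceTransformer("sentence-transformers/msmarco-distilbert-base-v4")
a, b = model.encode(["A man is playing guitar.",
                     "Someone is strumming an instrument."], convert_to_tensor=True)
print(util.cos_sim(a, b).item())
```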
Brief Details: Wav2Vec2-based model for age and gender recognition from speech, with 318M parameters and 24 transformer layers, fine-tuned on four datasets.
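A heavily hedged sketch: this assumes the checkpoint exposes a standard audio-classification head, which age/gender models with custom heads may not; the repo id is a placeholder.

```python
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="your-org/wav2vec2-age-gender",  # placeholder; may require the model's custom class
)
print(clf("speech_sample.wav"))  # label/score pairs, e.g. age bands and gender
```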
Brief Details: DistilBERT multilingual model - 134M parameter transformer supporting 104 languages, 2x faster than mBERT with comparable performance.
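A fill-mask sketch for distilbert-base-multilingual-cased:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="distilbert-base-multilingual-cased")
print(fill("Paris is the capital of [MASK]."))
```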
Brief Details: T5-large is a 738M parameter text-to-text transformer from Google, trained on the C4 dataset; it supports multiple languages and tasks such as translation and summarization.
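A text-to-text sketch for t5-large, using the task-prefix convention T5 was trained with:

```python
from transformers import pipeline

t5 = pipeline("text2text-generation", model="t5-large")
print(t5("translate English to German: The house is wonderful."))
```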
Brief Details: A multilingual NER model covering 11 Indian languages, trained on the Samanantar corpus with a BERT architecture. MIT licensed, with 478K+ downloads.
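An NER sketch; ai4bharat/IndicNER matches the description (Samanantar-derived training data, 11 Indic languages, MIT), though the id is inferred rather than stated above.

```python
from transformers import pipeline

ner = pipeline("ner", model="ai4bharat/IndicNER",  # inferred repo id
               aggregation_strategy="simple")
print(ner("नरेंद्र मोदी दिल्ली में रहते हैं।"))  # Hindi: "Narendra Modi lives in Delhi."
```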