Brief-details: RuBERT-based sentiment analysis model for Russian texts with 178M parameters. Classifies text as neutral, positive, or negative using a transformer architecture.
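A minimal classification sketch with the transformers pipeline; the repo ID below is a placeholder, since the entry does not name the exact checkpoint:

```python
from transformers import pipeline

# Placeholder repo ID -- substitute the actual RuBERT sentiment checkpoint.
clf = pipeline("text-classification", model="your-org/rubert-russian-sentiment")

# "Great service, highly recommend!" in Russian.
print(clf("Отличный сервис, всем рекомендую!"))
# Label names (neutral/positive/negative) depend on the checkpoint's config.
```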
Brief-details: A specialized LoRA model for SDXL 1.0 that generates coloring book-style images, featuring clean line art suitable for coloring activities.
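A sketch of loading such a LoRA with diffusers; the LoRA repo ID is a placeholder and the prompt phrasing is an assumption:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Placeholder repo ID -- substitute the actual coloring-book LoRA.
pipe.load_lora_weights("your-org/coloring-book-sdxl-lora")

image = pipe("a cat in a garden, coloring book page, clean line art").images[0]
image.save("coloring_page.png")
```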
Brief-details: A faster CTranslate2-optimized English ASR model based on Distil-Whisper medium, offering efficient speech recognition with FP16 precision.
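A transcription sketch using the faster-whisper library, which loads CTranslate2 conversions like this one; the repo ID is a placeholder:

```python
from faster_whisper import WhisperModel

# Placeholder repo ID -- substitute the actual CTranslate2 conversion.
model = WhisperModel("your-org/distil-whisper-medium.en-ct2",
                     device="cuda", compute_type="float16")

segments, info = model.transcribe("audio.wav")
for seg in segments:
    print(f"[{seg.start:.2f}s -> {seg.end:.2f}s] {seg.text}")
```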
Brief-details: A 109M-parameter sentence embedding model producing 768-dimensional vectors, trained on 215M question-answer pairs for semantic search applications.
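A retrieval sketch, assuming this entry refers to the sentence-transformers multi-qa-mpnet-base-dot-v1 checkpoint, which was trained with a dot-product objective:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/multi-qa-mpnet-base-dot-v1")

query_emb = model.encode("How do plants make food?")
doc_embs = model.encode([
    "Photosynthesis converts sunlight into chemical energy.",
    "The stock market closed higher today.",
])

# Dot-product similarity matches this checkpoint's training objective.
print(util.dot_score(query_emb, doc_embs))
```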
Brief-details: PhoBERT base model - state-of-the-art Vietnamese language model based on the RoBERTa architecture, with 43 likes and 174K+ downloads.
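A feature-extraction sketch, assuming the vinai/phobert-base checkpoint; note that PhoBERT expects word-segmented input (e.g. produced by VnCoreNLP):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
model = AutoModel.from_pretrained("vinai/phobert-base")

# Input must be word-segmented Vietnamese (underscores join multi-syllable words).
line = "Tôi là sinh_viên trường đại_học ."
inputs = tokenizer(line, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state
print(features.shape)
```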
Brief-details: SDXL-VAE is an advanced autoencoder for Stable Diffusion XL, offering improved high-frequency detail reconstruction and better performance metrics compared to the original SD VAE.
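A sketch of swapping the standalone VAE into an SDXL pipeline with diffusers, assuming the stabilityai/sdxl-vae weights; the SDXL base pipeline already ships with this VAE, so the explicit swap is shown for illustration:

```python
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Load the standalone VAE weights and plug them into an SDXL pipeline.
vae = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae")
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae
)

image = pipe("a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```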
Brief-details: DeBERTa-based natural language inference model for zero-shot classification, trained on the SNLI and MultiNLI datasets. Predicts entailment, neutral, or contradiction.
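A zero-shot classification sketch with the transformers pipeline, which reuses the NLI head under the hood; the repo ID is a placeholder:

```python
from transformers import pipeline

# Placeholder repo ID -- substitute the actual DeBERTa NLI checkpoint.
clf = pipeline("zero-shot-classification", model="your-org/deberta-nli")

result = clf(
    "The new GPU delivers twice the throughput of its predecessor.",
    candidate_labels=["technology", "sports", "politics"],
)
print(result["labels"][0], result["scores"][0])
```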
Brief-details: Optimized French speech recognition model based on Whisper Large V3, achieving word error rates of 3.98-8.91% across evaluation datasets. 1.61B parameters, MIT licensed.
Brief-details: A cased BERT model for Turkish language processing with 111M parameters, trained on 35GB of Turkish text including the OSCAR corpus and Wikipedia dumps.
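A fill-mask sketch, assuming this entry refers to the dbmdz/bert-base-turkish-cased (BERTurk) checkpoint, whose 35GB corpus matches this description:

```python
from transformers import pipeline

# Assumed checkpoint; substitute the exact repo ID if it differs.
fill = pipeline("fill-mask", model="dbmdz/bert-base-turkish-cased")

# "Istanbul is a very beautiful [MASK]." in Turkish.
print(fill("İstanbul çok güzel bir [MASK].")[0])
```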
Brief-details: Named Entity Recognition model for English using Flair embeddings, achieving an 89.27% F1-score on the OntoNotes dataset with 18 entity classes.
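A tagging sketch with the flair library, assuming the published flair/ner-english-ontonotes checkpoint, whose reported F1 matches this entry:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Assumed checkpoint; adjust if the entry refers to a -fast or -large variant.
tagger = SequenceTagger.load("flair/ner-english-ontonotes")

sentence = Sentence("George Washington went to Washington in 1789.")
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```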
Brief-details: A fine-tuned CLIP model with 428M parameters, featuring Geometric Parametrization for improved ImageNet/ObjectNet accuracy (~0.91 vs. the original's 0.84).
Brief-details: A compact Llama-architecture model (4.62M params) trained on the TinyStories dataset, featuring BF16 precision and an Apache 2.0 license. Ideal for lightweight text generation tasks.
Brief-details: Optimized English speech recognition model based on Whisper-small, converted to CTranslate2 format for faster inference with FP16 precision.
Brief-details: CLAP (Contrastive Language-Audio Pretraining) model optimized for audio-text matching and zero-shot classification, trained on the LAION-Audio-630K dataset.
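An audio-text matching sketch with the transformers CLAP classes, assuming the laion/clap-htsat-unfused checkpoint; the waveform is random placeholder audio:

```python
import numpy as np
import torch
from transformers import ClapModel, ClapProcessor

# Assumed checkpoint; substitute the exact CLAP repo ID if it differs.
model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

texts = ["a dog barking", "rain falling on a roof"]
audio = np.random.randn(48000)  # 1 second of placeholder audio at 48 kHz

inputs = processor(text=texts, audios=audio, sampling_rate=48000,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Similarity of the audio clip to each text prompt.
print(outputs.logits_per_audio.softmax(dim=-1))
```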
Brief-details: ELECTRA small generator model from Google - a lightweight transformer for masked language modeling with 181K+ downloads. Supplies token replacements for ELECTRA's discriminative (replaced-token-detection) pre-training.
Brief-details: RobBERT v2 is a state-of-the-art Dutch language model with 117M parameters, trained on 39GB of Dutch text. Excels at NLP tasks like sentiment analysis and NER.
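A fill-mask sketch, assuming the published pdelobelle/robbert-v2-dutch-base checkpoint (RoBERTa-style, so the mask token is <mask>):

```python
from transformers import pipeline

# Assumed checkpoint for RobBERT v2.
fill = pipeline("fill-mask", model="pdelobelle/robbert-v2-dutch-base")

# "There is a <mask> in my garden." in Dutch.
print(fill("Er staat een <mask> in mijn tuin.")[0])
```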
Brief-details: Universal neural vocoder for high-quality audio generation, supporting 44kHz sampling rate with 128 mel bands and 512x upsampling. Built by NVIDIA for advanced audio synthesis.
Brief-details: Nougat-base is a 349M-parameter vision encoder-decoder model for converting academic PDFs to markdown, pairing a Swin Transformer encoder with an mBART decoder.
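A page-conversion sketch with the transformers Nougat classes, assuming the facebook/nougat-base checkpoint and a pre-rendered page image:

```python
from PIL import Image
from transformers import NougatProcessor, VisionEncoderDecoderModel

processor = NougatProcessor.from_pretrained("facebook/nougat-base")
model = VisionEncoderDecoderModel.from_pretrained("facebook/nougat-base")

# One rendered PDF page as an RGB image.
page = Image.open("page.png").convert("RGB")
pixel_values = processor(page, return_tensors="pt").pixel_values

outputs = model.generate(pixel_values, max_new_tokens=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```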
Brief-details: Neural machine translation model for English-to-Spanish conversion, developed by Helsinki-NLP. Achieves a 54.9 BLEU score on the Tatoeba test set. Apache 2.0 licensed.
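A translation sketch, assuming this entry refers to the Helsinki-NLP/opus-mt-en-es checkpoint:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["The weather is beautiful today."],
                  return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```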
Brief-details: Qwen2.5-14B-Instruct is a powerful 14.8B parameter LLM with 128K token context, supporting 29+ languages and specialized in coding, math, and long-text generation.
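A chat sketch using the standard transformers chat-template flow with the Qwen/Qwen2.5-14B-Instruct checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-14B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto",
                                             device_map="auto")

messages = [{"role": "user",
             "content": "Write a Python one-liner to reverse a string."}]
text = tokenizer.apply_chat_template(messages, tokenize=False,
                                     add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:],
                       skip_special_tokens=True))
```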
Brief-details: Facebook's multilingual speech model pretrained on 53 languages, designed for speech recognition tasks. Features XLSR architecture with 16kHz audio processing and cross-lingual capabilities.
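A feature-extraction sketch, assuming the facebook/wav2vec2-large-xlsr-53 checkpoint; note it is pretrained only and needs a fine-tuned head (e.g. CTC) before it can transcribe speech:

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-xlsr-53")

waveform = np.random.randn(16000)  # 1 second of placeholder 16 kHz audio
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # contextual speech representations
```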