Brief Details: RoBERTa-based sentiment classifier for book reviews & stories, 355M params, 3-class prediction (pos/neg/neutral), MIT licensed
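Checkpoints like this load directly into the transformers text-classification pipeline. A minimal sketch; the repo id below is a placeholder, not the actual model id, and the label names depend on the checkpoint's config:

```python
from transformers import pipeline

# Hypothetical repo id -- substitute the actual checkpoint from the Hub.
classifier = pipeline("text-classification", model="your-org/roberta-large-book-sentiment")

# Returns the top label (positive/negative/neutral) with its score.
print(classifier("I couldn't put this novel down; the ending was perfect."))
# e.g. [{'label': 'positive', 'score': 0.98}]
```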
BRIEF-DETAILS: Turkish NER model fine-tuned on the SUNLP-NER-Twitter dataset (5,000 tweets), achieving an 82.69% F1 score across 7 entity categories.
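A token-classification checkpoint like this is typically used via the ner pipeline. A sketch, with a hypothetical repo id; the entity labels depend on the 7 categories the checkpoint defines:

```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="your-org/bert-turkish-ner-twitter",  # hypothetical repo id
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)

print(ner("Mustafa Kemal Atatürk Ankara'da anıldı."))
```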
Brief-details: SpeechT5-VC is a unified speech-text model specialized in voice conversion, fine-tuned on the CMU ARCTIC dataset (4 speakers) and supporting cross-modal speech/text processing.
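Assuming this entry refers to the microsoft/speecht5_vc checkpoint, transformers exposes a dedicated speech-to-speech API for it. A minimal sketch; the zero arrays stand in for a real 16 kHz utterance and a real x-vector speaker embedding:

```python
import numpy as np
import torch
from transformers import SpeechT5ForSpeechToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_vc")
model = SpeechT5ForSpeechToSpeech.from_pretrained("microsoft/speecht5_vc")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Placeholder for a real 16 kHz mono source utterance.
source_waveform = np.zeros(16000, dtype=np.float32)
inputs = processor(audio=source_waveform, sampling_rate=16000, return_tensors="pt")

# A (1, 512) x-vector selects the target voice; in practice, extract one
# from a recording of the target speaker.
speaker_embeddings = torch.zeros(1, 512)

converted = model.generate_speech(inputs["input_values"], speaker_embeddings, vocoder=vocoder)
```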
Brief Details: Compact CLIP variant optimized for English, 8x smaller than the original CLIP, pairing xtremedistil-l6-h256 as the text encoder with edgenext_small as the vision encoder.
Brief-details: Neural translation model (238M params) for North Germanic languages to Arabic. Supports Danish and Swedish source text, with BLEU scores of 16.8-19.9.
Brief-details: YOLOS-small model fine-tuned for license plate detection, achieving 49% AP. Built on the Vision Transformer architecture with a DETR-style loss; trained on 5,200 images, PyTorch compatible.
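Inference follows the standard transformers object-detection flow; a sketch with a placeholder repo id:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, YolosForObjectDetection

repo = "your-org/yolos-small-license-plate"  # hypothetical repo id
processor = AutoImageProcessor.from_pretrained(repo)
model = YolosForObjectDetection.from_pretrained(repo)

image = Image.open("car.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits and normalized boxes into thresholded pixel-space detections.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]
for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```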
Brief Details: A Vision Transformer model fine-tuned for car classification, achieving 86% accuracy on the Stanford Cars dataset (196 classes).
Brief Details: VideoMAE large model fine-tuned on Kinetics-400 dataset. 304M parameters, achieves 84.7% top-1 accuracy for video classification tasks. Built on masked autoencoder architecture.
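Assuming this is the MCG-NJU/videomae-large-finetuned-kinetics checkpoint, classification follows the usual VideoMAE recipe; the random frames below stand in for a decoded 16-frame clip:

```python
import numpy as np
import torch
from transformers import VideoMAEForVideoClassification, VideoMAEImageProcessor

repo = "MCG-NJU/videomae-large-finetuned-kinetics"  # assumed checkpoint id
processor = VideoMAEImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

# The model expects 16 frames per clip; replace with real decoded video frames.
video = list(np.random.rand(16, 3, 224, 224).astype(np.float32))
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # one of the 400 Kinetics labels
```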
Brief Details: Legal-BERTimbau-base is a Portuguese BERT model fine-tuned for legal-domain tasks, with 110M parameters, supporting masked language modeling and embedding generation.
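The masked-LM use maps onto the stock fill-mask task; a sketch with a hypothetical repo id:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="your-org/legal-bertimbau-base")  # hypothetical id

# "The defendant was ordered to pay [MASK]."
for pred in fill("O réu foi condenado ao pagamento de [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```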
Brief-details: A neural translation model (238M params) converting Arabic to North Germanic languages (Danish, Norwegian, Swedish) with BLEU scores of 20-29 on test sets.
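Both translation directions above can be driven through the translation pipeline. A sketch with a placeholder repo id; multilingual Marian/OPUS-MT checkpoints usually select the target language with a >>lang<< prefix token:

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="your-org/opus-mt-ar-north-germanic",  # hypothetical repo id
)

# ">>dan<<" requests Danish output; "مرحبا بالعالم" is "Hello, world".
print(translator(">>dan<< مرحبا بالعالم")[0]["translation_text"])
```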
Brief Details: IBM's Re2G reranker model combines neural retrieval and reranking for improved passage retrieval, showing 9-34% gains over SOTA on KILT tasks.
BRIEF DETAILS: Ad-Corre is an adaptive correlation-based loss model for facial expression recognition, achieving state-of-the-art results on RAF-DB dataset using Xception backbone and deep metric learning.
Brief-details: IBM's Re2G reranker model (109M params) for improving retrieval-augmented generation through neural reranking, trained on Natural Questions dataset
Brief Details: A 141M-parameter Ukrainian speech recognition model based on the Citrinet-1024 architecture, trained on 69 hours of speech and achieving 5.02% WER.
Brief-details: DeBERTa-v3 model fine-tuned on SQuAD2.0 for extractive QA, achieving 83.8% exact match accuracy. 184M parameters, supports English text.
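A question-answering pipeline sketch; the repo id is a placeholder. Because the model was tuned on SQuAD2.0, it can also abstain when the context holds no answer:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="your-org/deberta-v3-base-squad2")  # hypothetical id

result = qa(
    question="Who wrote the report?",
    context="The annual report was written by the audit committee in March.",
    handle_impossible_answer=True,  # allow an empty answer for unanswerable questions
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```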
BRIEF-DETAILS: DialoGPT model fine-tuned for Nigerian Pidgin English conversations, specialized in restaurant/hotel domains, reaching a perplexity of 38.52.
Brief Details: MyanBERTa is a BERT-based pre-trained language model for Myanmar/Burmese, trained on 5.9M sentences with byte-level BPE tokenization.
Brief Details: Polish sentence similarity model based on DistilRoBERTa with 124M parameters, optimized for paraphrase detection and semantic text matching in Polish language.
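A sentence-transformers sketch for scoring Polish paraphrases; the repo id is a placeholder for the actual checkpoint:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your-org/st-polish-paraphrase-distilroberta")  # hypothetical id

# Two paraphrases: "The cat is sleeping on the couch." / "A cat is dozing on the sofa."
emb = model.encode(
    ["Kot śpi na kanapie.", "Na sofie drzemie kot."],
    convert_to_tensor=True,
)
print(util.cos_sim(emb[0], emb[1]).item())  # near 1.0 for close paraphrases
```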
Brief-details: A fine-tuned DistilBERT model for resume section classification, achieving a 96.5% F1 score and 98% ROC AUC after 20 epochs of training.
BRIEF-DETAILS: Translation model for ancient Sumerian and Akkadian cuneiform texts into English, built on the T5 architecture in PyTorch.
BRIEF DETAILS: Russian BERT-based model for punctuation and case restoration, 426M params, designed for post-processing speech recognition output.