Brief-details: Deformable DETR: Advanced object detection transformer model with 40.2M parameters, trained on the COCO dataset. Features a ResNet-50 backbone and a deformable attention mechanism.
Brief Details: Advanced object detection transformer model with deformable attention and two-stage refinement. 41.3M params, COCO-trained, Apache 2.0 licensed.
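Both Deformable DETR entries above can be run for inference through the `transformers` object-detection classes. A minimal sketch, assuming a placeholder repo ID (substitute the actual checkpoint name from the model card):

```python
# Minimal inference sketch for a Deformable DETR checkpoint.
# The repo ID below is a placeholder, not the actual model card's ID.
import torch
from PIL import Image
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection

model_id = "your-namespace/deformable-detr"  # placeholder repo ID
processor = AutoImageProcessor.from_pretrained(model_id)
model = DeformableDetrForObjectDetection.from_pretrained(model_id)

image = Image.open("street_scene.jpg")  # any RGB image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into labeled detections above a confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```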
Brief Details: Vietnamese speech recognition model based on wav2vec 2.0, fine-tuned on 250 hours of labeled data with 95M parameters, achieving 6.15% WER on the VIVOS dataset.
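A minimal transcription sketch for the Vietnamese wav2vec 2.0 model above, using the `automatic-speech-recognition` pipeline; the repo ID is a placeholder and the audio is assumed to be 16 kHz mono to match wav2vec 2.0 pretraining:

```python
# Minimal ASR sketch with a wav2vec 2.0 CTC checkpoint (placeholder repo ID).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-namespace/wav2vec2-base-vietnamese-250h",  # placeholder repo ID
)
print(asr("vivos_sample.wav")["text"])  # path to a 16 kHz mono audio file
```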
Brief-details: A fine-tuned DistilRoBERTa model for financial sentiment analysis, achieving an 88.35% F1-score on financial text classification, including COVID-19-related financial text.
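A minimal sketch of scoring financial text with the sentiment model above; the repo ID is a placeholder and the label names depend on the checkpoint:

```python
# Minimal financial-sentiment sketch with a DistilRoBERTa classifier (placeholder repo ID).
from transformers import pipeline

clf = pipeline("text-classification", model="your-namespace/distilroberta-financial-sentiment")
print(clf("Quarterly revenue beat expectations despite supply-chain disruptions."))
# -> e.g. [{'label': 'positive', 'score': 0.98}]  (labels depend on the checkpoint)
```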
Brief Details: T5-base model fine-tuned for sarcasm detection on Twitter data. 223M parameters, achieves 83% accuracy, specializes in context-aware irony detection.
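Sarcasm detection with a T5 fine-tune is typically framed as text-to-text generation. A minimal sketch, assuming a placeholder repo ID; the expected input format and output labels are defined by the checkpoint's model card:

```python
# Minimal prompt-style classification sketch with a T5 fine-tune.
# Repo ID and expected input/output format are assumptions.
from transformers import pipeline

detector = pipeline("text2text-generation", model="your-namespace/t5-base-sarcasm")  # placeholder
tweet = "Oh great, another Monday. Exactly what I needed."
print(detector(tweet)[0]["generated_text"])  # expected to emit a label such as "sarcastic"
```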
Brief Details: T5-base model fine-tuned for news summarization, 223M parameters, trained on 4,515 news articles for concise summary generation.
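A minimal summarization sketch for seq2seq fine-tunes such as the T5 news summarizer above (the same pattern applies to the DistilBART Terms-of-Service summarizer listed further down); the repo ID is a placeholder:

```python
# Minimal abstractive-summarization sketch (placeholder repo ID).
from transformers import pipeline

summarizer = pipeline("summarization", model="your-namespace/t5-base-news-summarization")
article = "..."  # full news article text goes here
print(summarizer(article, max_length=80, min_length=20, do_sample=False)[0]["summary_text"])
```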
Brief-details: Vision Transformer model fine-tuned for bean disease classification, achieving 94.5% accuracy on the test set. Built on the ViT-base architecture, demonstrating strong performance on an agricultural image-classification task.
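A minimal inference sketch for the ViT bean-disease classifier above; the repo ID is a placeholder and the label set comes from the checkpoint's config:

```python
# Minimal image-classification sketch with a fine-tuned ViT (placeholder repo ID).
from transformers import pipeline

classifier = pipeline("image-classification", model="your-namespace/vit-base-beans")
print(classifier("bean_leaf.jpg"))  # top labels such as angular_leaf_spot / bean_rust / healthy
```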
Brief-details: T5-base model fine-tuned for commonsense text generation. 297M parameters, achieves ROUGE-L 39.47. Specializes in generating coherent everyday scenarios from concept sets.
Brief-details: A specialized privacy-focused language model pre-trained on ~1M privacy policies, built on RoBERTa for analyzing and understanding privacy policy documents.
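Since the privacy-policy model above is a pre-trained masked language model rather than a task-specific head, a quick way to probe it is the `fill-mask` pipeline. A minimal sketch with a placeholder repo ID:

```python
# Minimal masked-token prediction sketch with a privacy-policy RoBERTa model.
# Repo ID is a placeholder; RoBERTa-style models use the <mask> token.
from transformers import pipeline

fill = pipeline("fill-mask", model="your-namespace/privacy-policy-roberta")
for pred in fill("We may share your <mask> with third-party advertisers."):
    print(pred["token_str"], round(pred["score"], 3))
```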
Brief Details: T5-base model fine-tuned for emotion recognition, capable of classifying text into 6 emotions with 93% accuracy. Popular with 10K+ downloads.
Brief-details: CodeBERT model fine-tuned for detecting insecure code patterns. Achieves 65.30% accuracy on the test set, improving on baseline models. Built for binary classification of code security.
Brief Details: Spanish DistilBERT model fine-tuned for Q&A tasks, offering faster inference through knowledge distillation from BERT. Trained on the SQuAD2.0-es dataset.
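A minimal extractive question-answering sketch for the Spanish DistilBERT model above (the same pattern applies to the BERT-Tiny SQuAD v2 model below); the repo ID is a placeholder:

```python
# Minimal extractive QA sketch (placeholder repo ID).
from transformers import pipeline

qa = pipeline("question-answering", model="your-namespace/distilbert-base-es-squad2")
result = qa(
    question="¿Dónde se encuentra la sede de la empresa?",
    context="La empresa fue fundada en 1995 y su sede se encuentra en Madrid.",
)
print(result["answer"], result["score"])
```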
Brief-details: Turkish BERT2BERT model specialized in text summarization with 140M parameters, trained on the MLSUM dataset. Achieves a 29.48 ROUGE-2 F-measure on the test set.
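BERT2BERT checkpoints are loaded through the encoder-decoder API rather than a single-architecture class. A minimal sketch for the Turkish summarizer above, assuming a placeholder repo ID:

```python
# Minimal sketch of summarizing Turkish text with a warm-started BERT2BERT
# encoder-decoder checkpoint (placeholder repo ID).
from transformers import AutoTokenizer, EncoderDecoderModel

model_id = "your-namespace/bert2bert-turkish-summarization"  # placeholder repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

text = "..."  # Turkish news article goes here
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(
    inputs["input_ids"], attention_mask=inputs["attention_mask"],
    max_length=80, num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```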
Brief-details: BERT-Tiny model fine-tuned on SQuAD v2.0 for question-answering tasks. Achieves 57.12% EM and 60.86% F1 score. Compact 24.34MB size ideal for resource-constrained environments.
Brief-details: Hindi ELECTRA-based language model with 14.7M parameters, designed for Hindi NLP tasks including classification and sentiment analysis. Trained on CommonCrawl and Wikipedia data.
Brief-details: BERT model fine-tuned specifically for Thai language processing, featuring 106M parameters and trained on Thai Wikipedia data. Offers strong performance on XNLI and review classification tasks.
Brief-details: KoELECTRA v3 is a Korean ELECTRA-based language model optimized for discriminative tasks, featuring an improved tokenizer and pretrained weights for Korean text processing.
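A minimal sketch of querying an ELECTRA discriminator head directly; in practice a checkpoint like this is usually loaded with an `AutoModelFor*` head and fine-tuned on a downstream Korean task. The repo ID is a placeholder:

```python
# Minimal sketch of the ELECTRA replaced-token-detection head (placeholder repo ID).
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

model_id = "your-namespace/koelectra-base-v3-discriminator"  # placeholder repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ElectraForPreTraining.from_pretrained(model_id)

inputs = tokenizer("...", return_tensors="pt")  # a Korean sentence to inspect
with torch.no_grad():
    logits = model(**inputs).logits  # one score per input token
print((torch.sigmoid(logits) > 0.5).int().tolist())  # 1 marks tokens judged "replaced"
```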
Brief Details: A lightweight BERT-based model with 6 layers and a hidden size of 256, achieving an 8.7x speedup over BERT-base while maintaining strong performance on NLP tasks.
Brief-details: Advanced Dutch OCR-correction model based on ByT5, fine-tuned on the OSCAR dataset. Specializes in fixing OCR errors in Dutch text with high accuracy.
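ByT5 operates directly on UTF-8 bytes, so OCR correction can be run as plain text-to-text generation with no special preprocessing. A minimal sketch with a placeholder repo ID and a made-up noisy input:

```python
# Minimal OCR post-correction sketch with a ByT5 fine-tune (placeholder repo ID).
from transformers import pipeline

corrector = pipeline("text2text-generation", model="your-namespace/byt5-dutch-ocr-correction")
noisy = "De qualiteit van bet oude boek is sleeht leesbaar."  # simulated OCR errors
print(corrector(noisy, max_length=128)[0]["generated_text"])
```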
Brief-details: A specialized T&C summarization model based on DistilBART, fine-tuned on TOSDR data for summarizing Terms of Service documents using a hybrid extractive-abstractive approach.
Brief Details: A distilled BERT variant with 12 layers and a hidden size of 384, achieving a 2.7x speedup while maintaining an average GLUE score of 88.5. Created by Microsoft.