Brief-details: English to Greek neural machine translation model by Helsinki-NLP, achieving a 56.4 BLEU score on the Tatoeba test set, based on the transformer architecture
Brief-details: Transformer-based English to Bislama translation model by Helsinki-NLP, achieving a 36.4 BLEU score on the JW300 test set, using SentencePiece normalization
Brief-details: German to Arabic neural machine translation model by Helsinki-NLP. BLEU: 19.7, chrF: 0.486. Handles multiple Arabic dialects.
Brief-details: A Helsinki-NLP transformer model for translating Bantu languages to English, supporting 12 source languages with strong performance on Xhosa (37.2 BLEU) and Zulu (40.9 BLEU) translations.
Brief-details: Neural machine translation model for Bengali to English translation. Achieves a 49.7 BLEU score. Built by Helsinki-NLP using the transformer-align architecture.
Brief-details: French hate speech detection model based on monolingual BERT, achieving a 0.69 validation score. Trained on English data and fine-tuned for French content analysis.
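The translation entries above all report BLEU scores. As a rough illustration of what that number measures (modified n-gram precision combined with a brevity penalty), here is a minimal smoothed sentence-level sketch; real evaluations use tooling such as sacrebleu, and the function name here is illustrative, not part of any of these models' pipelines:

```python
from collections import Counter
import math

def sentence_bleu(reference, hypothesis, max_n=4):
    """Smoothed sentence-level BLEU on a 0-100 scale (illustrative sketch)."""
    ref, hyp = reference.split(), hypothesis.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        # Clipped overlap: each hypothesis n-gram is credited at most as
        # often as it appears in the reference.
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        total = max(sum(hyp_ngrams.values()), 1)
        # Add-one smoothing so one empty n-gram order does not zero the score.
        log_prec_sum += math.log((overlap + 1) / (total + 1))
    # Brevity penalty discourages overly short hypotheses.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return 100 * bp * math.exp(log_prec_sum / max_n)
```

A perfect match scores 100; divergent output scores lower, which is the sense in which the 56.4 and 36.4 figures above can be compared within a test set.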
Brief-details: DialoGPT-small-shrek is a conversational AI model fine-tuned to mimic Shrek's distinctive personality and dialogue style from the Shrek franchise, built on Microsoft's DialoGPT architecture.
Brief-details: Italian GPT-2 small model by GroNLP with retrained embeddings and full model fine-tuning, optimized for Italian language generation and NLP tasks.
Brief-details: XLS-R-based Dutch speech recognition model with 2B parameters, featuring a 5-gram language model and Hunspell typo correction. Achieves 3.93% WER on Common Voice 8.0.
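The 3.93% figure above is word error rate: word-level edit distance between hypothesis and reference, divided by the reference length. A minimal sketch of that computation (the function name is an assumption for illustration, not part of the model's tooling):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: Levenshtein distance over reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```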
Brief-details: RoBERTa large model for Finnish language processing, trained on 78GB text data including news and web content. Optimized for MLM tasks and downstream fine-tuning.
Brief-details: Transformer-based model for gene expression prediction from DNA sequences, achieving a Pearson correlation of ~0.45 on human data with 196,608 bp input sequences.
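The ~0.45 quoted above is the standard Pearson correlation coefficient between predicted and measured expression values. A minimal sketch of the computation (an illustrative helper, not the model's actual evaluation code):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance of the two sequences, divided by the product of
    # their standard deviations (common factors of n cancel).
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Values range from -1 (perfect anticorrelation) to 1 (perfect correlation), so ~0.45 indicates a moderate positive relationship between predictions and measurements.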
Brief-details: Trilingual BERT model (Lithuanian, Latvian, English) built on the xlm-roberta-base architecture, outperforming mBERT on NER tasks
Brief-details: DialoGPT-medium-asunayuuki is a conversational AI model based on DialoGPT-medium, fine-tuned to emulate Asuna Yuuki's character dialogue patterns.
Brief-details: Russian BERT model optimized for conversational AI, featuring 180M parameters and trained on diverse Russian text sources including social media and subtitles.
Brief-details: Specialized BERT model for Slavic languages (Bulgarian, Czech, Polish, Russian) with 180M parameters, trained on news and Wikipedia data
Brief-details: African NER model supporting 9 languages (Hausa, Igbo, etc.). Fine-tuned mBERT for detecting PER, LOC, ORG, DATE entities. F1 scores of 66-89%.
Brief-details: A fine-tuned BERT model specialized for Yoruba language processing, achieving 82.58% F1 on MasakhaNER and 79.11% on text classification tasks.
Brief-details: BERT model fine-tuned on a Swahili corpus, achieving an 89.36% F1 score on MasakhaNER. Optimized for NER and text classification tasks.
Brief-details: BERT model fine-tuned on Igbo language texts, achieving an 86.75% F1 score on MasakhaNER. Optimized for NER and text classification in Igbo.
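The MasakhaNER F1 figures in the NER entries above are typically computed at the entity level, where a prediction counts only if both span and type match a gold entity exactly. A minimal sketch under that assumption (the `(start, end, type)` tuple format and function name are illustrative; MasakhaNER evaluations use tooling such as seqeval):

```python
def entity_f1(gold, predicted):
    """Strict entity-level F1 over sets of (start, end, type) tuples."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)  # exact span-and-type matches only
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)
```

Under this strict matching, a correct span with the wrong entity type (e.g. LOC predicted as ORG) counts as both a false positive and a false negative.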
Brief-details: Multilingual BERT model analyzing COVID-19 tweets in Belgium to track public sentiment towards pandemic measures, especially curfew policies
Brief-details: MADNet Keras - A lightweight, self-adapting deep stereo depth estimation model optimized for real-time performance, implemented in Keras.