Brief-details: BERT model specialized for European Court of Human Rights (ECHR) cases, with 110M parameters, pre-trained on 12.5K ECHR legal documents.
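A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub; the model ID below ("org/bert-base-echr") is a placeholder, not the actual name.

```python
from transformers import pipeline

# Placeholder model ID -- substitute the actual Hub checkpoint.
fill_mask = pipeline("fill-mask", model="org/bert-base-echr")

# Masked-token prediction on ECHR-style legal text.
for pred in fill_mask("The applicant alleged a violation of [MASK] 6 of the Convention."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```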
Brief-details: A Greek BERT model trained on Wikipedia, European Parliament proceedings, and OSCAR. 12-layer architecture with 110M parameters for Greek NLP tasks.
Brief-details: LayoutLMv2 model fine-tuned on the FUNSD dataset for document understanding tasks, using Microsoft's base architecture with optimized training parameters.
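A sketch of document token classification with this kind of checkpoint; the fine-tuned model ID is a placeholder, and LayoutLMv2 additionally requires detectron2 and Tesseract OCR to be installed.

```python
import torch
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

# "org/layoutlmv2-finetuned-funsd" is a placeholder fine-tuned checkpoint.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained("org/layoutlmv2-finetuned-funsd")

image = Image.open("form.png").convert("RGB")
encoding = processor(image, return_tensors="pt")  # runs OCR internally (apply_ocr=True)
with torch.no_grad():
    logits = model(**encoding).logits
# Map per-token predictions back to the FUNSD label names.
labels = [model.config.id2label[p] for p in logits.argmax(-1).squeeze().tolist()]
print(labels)
```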
Brief-details: FlauBERT model fine-tuned for French recipe classification, categorizing cooking instructions into 8 distinct categories with high reported accuracy.
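A minimal sketch with a placeholder model ID, assuming the classifier is exposed as a standard sequence-classification head:

```python
from transformers import pipeline

# "org/flaubert-recipe-classification" is a placeholder ID for illustration.
classifier = pipeline("text-classification", model="org/flaubert-recipe-classification")
print(classifier("Faites revenir les oignons dans l'huile d'olive pendant cinq minutes."))
# -> [{'label': <one of the 8 recipe categories>, 'score': ...}]
```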
Brief-details: A specialized NLP model for classifying spoken text as questions or statements, trained on 7k+ interview samples and intended for ASR post-processing.
Brief-details: T5-small model fine-tuned on the WikiSQL dataset for English-to-SQL translation, trained on 56K+ examples and straightforward to use via the transformers API.
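A usage sketch with a placeholder checkpoint name; the "translate English to SQL:" prefix follows the common convention for T5 WikiSQL fine-tunes and may differ for this model.

```python
from transformers import pipeline

# Placeholder checkpoint name -- substitute the actual Hub ID.
to_sql = pipeline("text2text-generation", model="org/t5-small-finetuned-wikisql")
result = to_sql("translate English to SQL: How many users signed up in 2023?")
print(result[0]["generated_text"])
```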
Brief-details: T5-base model fine-tuned on break_data for converting QDMR decompositions back into natural language questions. 20k+ training samples.
Brief-details: Spanish GPT-2 model trained on a 20GB Spanish corpus, reaching a perplexity of 11.36. A collaborative effort during the Flax/JAX Community Week.
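A minimal generation sketch; the model ID is a placeholder and the sampling settings are illustrative, not from the model card.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="org/gpt2-spanish")  # placeholder ID
out = generator("La inteligencia artificial", max_new_tokens=40, do_sample=True, top_k=50)
print(out[0]["generated_text"])
```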
Brief-details: A fine-tuned DistilRoBERTa model for detecting suicide and depression in tweets, achieving 71.58% accuracy. Not production-ready.
Brief-details: French text summarization model based on CamemBERT (RoBERTa), fine-tuned on the MLSUM dataset. Achieves a ROUGE-2 F1 score of 13.30.
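A summarization sketch, assuming the checkpoint works with the standard pipeline; the model ID and length limits below are placeholders.

```python
from transformers import pipeline

# Placeholder checkpoint name for the CamemBERT-based summarizer.
summarizer = pipeline("summarization", model="org/camembert-mlsum-summarization")
article = "Le gouvernement a annoncé mardi une série de mesures destinées à soutenir l'économie..."
print(summarizer(article, max_length=64, min_length=10)[0]["summary_text"])
```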
Brief-details: German BERT2BERT model fine-tuned for text summarization on the MLSUM dataset, achieving a ROUGE-2 F-measure of 33.15. Built on bert-base-german-cased.
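Because BERT2BERT checkpoints are encoder-decoder models, they can also be driven directly via `EncoderDecoderModel` and `generate`; a sketch with a placeholder checkpoint name and illustrative beam settings:

```python
import torch
from transformers import AutoTokenizer, EncoderDecoderModel

ckpt = "org/bert2bert-german-mlsum"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt)

text = "Ein langer deutscher Nachrichtenartikel, der zusammengefasst werden soll ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask,
                         max_length=64, num_beams=4)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```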
Brief-details: A transformers-based model by Ejada with limited public information. Requires further documentation on architecture, training data, and specific use cases.
Brief-details: A Hugging Face transformers model by Ejada with unspecified architecture and capabilities. Limited documentation available. Currently in development phase.
Brief-details: Language model trained on the Europarl dataset for storytelling applications, focusing on English generation with mixed-domain capabilities.
Brief-details: Fine-tuned 3B-parameter model based on StableLM Zephyr 3B, specialized for home automation control with function-calling capabilities and multi-language support.
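A heavily hedged sketch of chat-style inference: the model ID, system prompt, and function-call schema below are placeholders, and the real format must come from the model card; this also assumes the tokenizer ships a chat template.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

ckpt = "org/home-automation-3b"  # placeholder; see the model card for the real ID and schema
tokenizer = AutoTokenizer.from_pretrained(ckpt)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "system", "content": "You control smart-home devices. Reply with a function call."},
    {"role": "user", "content": "Turn off the kitchen lights."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```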
Brief-details: Multilingual BERT-based model for classifying questions as boolean or short-answer extractive, fine-tuned on the TyDiQA dataset for enhanced multilingual QA capabilities.
Brief-details: BERT model fine-tuned for animacy detection: determines whether entities in text are animate (living) or inanimate (non-living).
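A lower-level classification sketch with explicit probabilities; the checkpoint name is a placeholder, and the actual task framing (sequence- vs. token-level) may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "org/bert-animacy"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

inputs = tokenizer("The river carried the boat downstream.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()
# Print each label with its predicted probability.
for i, p in enumerate(probs.tolist()):
    print(model.config.id2label[i], round(p, 3))
```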
Brief-details: A fine-tuned variant of FLAN-T5-large optimized for web-based question answering tasks, focused on extracting accurate answers from web content.
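A context-plus-question prompt sketch in the usual FLAN-T5 style; the model ID and prompt wording are placeholders, not confirmed by the source.

```python
from transformers import pipeline

qa = pipeline("text2text-generation", model="org/flan-t5-large-web-qa")  # placeholder ID
prompt = (
    "Answer the question using the context.\n"
    "Context: The Eiffel Tower was completed in 1889.\n"
    "Question: When was the Eiffel Tower completed?"
)
print(qa(prompt, max_new_tokens=16)[0]["generated_text"])
```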
Brief-details: KpfBERT is a BERT-based language model developed by jinmang2, designed for Korean language processing tasks and hosted on Hugging Face.
Brief-details: Whisper-enhanced-ml: fine-tuned speech recognition model achieving 22.35% WER on Common Voice 11.0, trained with the Adam optimizer for 500 steps.
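A transcription sketch with a placeholder checkpoint; the ASR pipeline handles resampling and feature extraction, but audio decoding requires ffmpeg on the host.

```python
from transformers import pipeline

# Placeholder model ID -- substitute the actual fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="org/whisper-enhanced-ml")
print(asr("sample.wav")["text"])
```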
Brief-details: RoBERTa-based hate speech detection model trained on the Dynabench R4 dataset, specialized in target-based hate speech classification.
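A short sketch with a placeholder model ID, returning scores for every label rather than just the top one:

```python
from transformers import pipeline

detector = pipeline("text-classification", model="org/roberta-hate-dynabench-r4")  # placeholder
print(detector("I can't stand people like you.", top_k=None))  # scores for every label
```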