Brief-details: U-Net-based deep learning model for automatic teeth segmentation in panoramic X-ray images, developed by researchers at Yeditepe University for dental diagnostics.
Brief-details: ESPnet ASR model using a Conformer encoder and Transformer decoder for speech recognition and sentiment analysis on the Switchboard corpus.
Brief-details: Custom Legal-BERT model trained on 37GB of Harvard Law cases (3.4M decisions) using MLM/NSP objectives, with legal-specific tokenization and a 32k vocabulary.
Brief-details: DistilBERT-based model fine-tuned for financial relation extraction, achieving 82.4% F1-score across 5 relationship types. Specializes in detecting and classifying financial term relationships.
Brief-details: Ukrainian RoBERTa base model trained on 85M+ lines of text, including Wikipedia and social media. 125M parameters, 12-layer architecture.
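As a masked-language model, a checkpoint like this (or the Legal-BERT entry above) can be queried directly through the transformers fill-mask pipeline. A minimal sketch; the repo ID is a placeholder, not the model's actual name:

```python
from transformers import pipeline

# "your-org/ukr-roberta-base" is a placeholder -- substitute the real checkpoint.
fill = pipeline("fill-mask", model="your-org/ukr-roberta-base")

# RoBERTa tokenizers mark the blank with <mask>.
# "Київ — столиця <mask>." = "Kyiv is the capital of <mask>."
for pred in fill("Київ — столиця <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```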
Brief-details: BERT base model fine-tuned on the SST-2 dataset via knowledge distillation from a larger BERT model, achieving a 78.9 GLUE score. Published at EMNLP 2023.
Brief-details: AutoNLP-trained sentiment analysis model for IMDB reviews with 93.88% accuracy and 0.983 AUC score. Excellent for binary text classification tasks.
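Binary classifiers like this one are typically served through the text-classification pipeline. A sketch under the assumption of a placeholder repo ID:

```python
from transformers import pipeline

# Placeholder repo ID -- point this at the actual IMDB checkpoint.
clf = pipeline("text-classification", model="your-org/autonlp-imdb-sentiment")

reviews = [
    "A beautifully shot film, even if the script never quite lands.",
    "Two hours I will never get back.",
]
for review, result in zip(reviews, clf(reviews)):
    print(f'{result["label"]} ({result["score"]:.3f}) -- {review}')
```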
Brief-details: T5-base Dutch language model with 223M parameters, pre-trained on the cleaned Dutch mC4 dataset. Achieves 0.70 evaluation accuracy for masked language modeling.
Brief-details: Fine-tuned Wav2Vec2 model for Mandarin Chinese ASR, trained on Common Voice zh-CN/TW datasets, achieving 20.9% CER on test set. Optimized for 16kHz audio input.
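Wav2Vec2 CTC checkpoints like this one (and the Japanese and multilingual ASR entries below) generally share the same inference pattern. A sketch assuming a placeholder repo ID and a local audio file; note the 16 kHz requirement from the entry above:

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "your-org/wav2vec2-large-xlsr-zh-cn"  # placeholder repo ID

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# The model expects 16 kHz mono input; librosa resamples on load.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax per frame, then the tokenizer collapses
# repeated tokens and blanks into the final transcript.
ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(ids)[0])
```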
Brief-details: Arabic question-answering model based on AraELECTRA, specialized for Arabic Wikipedia QA tasks. Fine-tuned on the ArTyDiQA dataset.
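Extractive QA models of this kind plug into the question-answering pipeline. A minimal sketch with a placeholder repo ID:

```python
from transformers import pipeline

# Placeholder repo ID for the AraELECTRA QA checkpoint.
qa = pipeline("question-answering", model="your-org/araelectra-arabic-qa")

# Context: "Cairo University was founded in 1908."
context = "تأسست جامعة القاهرة عام 1908."
# Question: "When was Cairo University founded?"
result = qa(question="متى تأسست جامعة القاهرة؟", context=context)
print(result["answer"], f'({result["score"]:.3f})')
```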
Brief-details: Japanese speech recognition model based on wav2vec2-large-xlsr-53, fine-tuned on the Common Voice and JSUT corpora, achieving 30.84% WER for Japanese ASR.
Brief-details: Fine-tuned Wav2Vec2-Large-XLSR-53 model for Japanese speech recognition with hiragana output, achieving 24.74% WER and 10.99% CER on the Common Voice test set.
Brief-details: A fine-tuned PEGASUS model specialized for summarizing legislative bills, achieving strong ROUGE scores (R1: 56.87, R2: 38.65) on the BillSum dataset.
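Bill summarization with a checkpoint like this would follow the standard summarization pipeline; the repo ID and input file below are illustrative:

```python
from transformers import pipeline

# Placeholder repo ID for the BillSum-fine-tuned PEGASUS checkpoint.
summarizer = pipeline("summarization", model="your-org/pegasus-billsum")

with open("bill.txt", encoding="utf-8") as f:
    bill_text = f.read()

# truncation=True guards against bills longer than the encoder's max length.
summary = summarizer(bill_text, max_length=128, min_length=32, truncation=True)
print(summary[0]["summary_text"])
```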
Brief-details: Multilingual speech recognition model supporting 56 languages, fine-tuned on the Common Voice dataset. Best performance on Spanish (WER 19.63%) and Esperanto (CER 6.23%).
Brief-details: BARTpho-syllable is a pioneering Vietnamese sequence-to-sequence model using the BART architecture, optimized for text generation and summarization tasks.
Brief-details: Falcon3-1B-Instruct: 1B-parameter multilingual LLM supporting 4 languages, with an 8K context length, optimized for STEM and reasoning tasks, using grouped-query attention (GQA).
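As an instruct model it is driven through a chat template. A sketch assuming the checkpoint is published under tiiuae/Falcon3-1B-Instruct (verify the repo ID before use):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "tiiuae/Falcon3-1B-Instruct"  # assumed repo ID -- verify before use

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Explain gradient descent in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```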
Brief-details: BERT-Large model optimized for MLPerf benchmarking by Furiosa AI. Likely focused on performance optimization and inference speed testing.
Brief-details: Fine-tuned ModernBERT model for question answering on SQuAD v2, featuring 149M parameters and an 8192-token context length. Achieves 83.96% exact-match accuracy.
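Because SQuAD v2 includes unanswerable questions, the QA pipeline's handle_impossible_answer flag is relevant for this model. A sketch with a placeholder repo ID:

```python
from transformers import pipeline

# Placeholder repo ID for the ModernBERT SQuAD v2 checkpoint.
qa = pipeline("question-answering", model="your-org/modernbert-squad2")

# With handle_impossible_answer=True the pipeline may return an empty
# answer instead of forcing a span when none exists in the context.
result = qa(
    question="Who wrote the report?",
    context="The committee met on Tuesday to review the budget.",
    handle_impossible_answer=True,
)
print(result)
```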
Brief-details: DeepForest-tree is an AI model for detecting tree crowns in RGB aerial imagery, combining LiDAR-based unsupervised learning with hand-annotated RGB data for improved accuracy.
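If this checkpoint is served through the open-source deepforest Python package (an assumption; the calls below match the deepforest 1.x API), prediction looks roughly like:

```python
from deepforest import main

# Load the package's released tree-crown detection weights.
model = main.deepforest()
model.use_release()

# Returns a pandas DataFrame of boxes: xmin, ymin, xmax, ymax, label, score.
boxes = model.predict_image(path="aerial_rgb.png")
print(boxes.head())
```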
Brief-details: RGB-to-X is a specialized model for converting images from the RGB color space into other color representations, useful in image-processing pipelines.
Brief-details: Advanced 4B-parameter multimodal LLM combining an InternViT vision encoder with Qwen2.5-3B-Instruct, capable of processing images, videos, and text, with strong reasoning ability.