Brief Details: BERT bi-encoder model for scientific document similarity, trained on 1.2M biomedical paper pairs. Specializes in computer science text analysis and citation-based learning.
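A minimal similarity sketch, assuming a hypothetical checkpoint ID (`your-org/scientific-biencoder`) and that the bi-encoder uses the [CLS] token as the document embedding, as SPECTER-style models typically do:

```python
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "your-org/scientific-biencoder"  # hypothetical ID; substitute the actual checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

papers = [
    "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
    "RoBERTa: A Robustly Optimized BERT Pretraining Approach",
]

with torch.no_grad():
    enc = tokenizer(papers, padding=True, truncation=True, return_tensors="pt")
    out = model(**enc)
    # Use the [CLS] token as the document embedding (a common bi-encoder convention).
    embs = out.last_hidden_state[:, 0, :]

sim = torch.nn.functional.cosine_similarity(embs[0], embs[1], dim=0)
print(f"cosine similarity: {sim.item():.3f}")
```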
BRIEF DETAILS: Turkish DistilBERT model fine-tuned for emotion classification with 83.25% accuracy. Handles 6 emotions (joy, sadness, love, anger, fear, surprise) for Turkish text.
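A minimal classification sketch using the standard `transformers` pipeline; the checkpoint ID below is a placeholder for the actual Turkish emotion model:

```python
from transformers import pipeline

# Hypothetical model ID; replace with the actual Turkish emotion checkpoint.
clf = pipeline("text-classification", model="your-org/distilbert-turkish-emotion")

print(clf("Bugün çok mutluyum!"))   # e.g. [{'label': 'joy', 'score': 0.97}]
```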
Brief Details: Unikud is an AI model by malper available on Hugging Face, with primary documentation hosted on DagsHub. Publicly available information is limited, suggesting an experimental release.
Brief Details: LayoutLMv3 model fine-tuned on FUNSD dataset for document AI tasks, achieving 90.59% F1 score. Specializes in unified text and image processing.
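A hedged sketch of token classification with LayoutLMv3; the fine-tuned checkpoint ID, the example words, and the 0-1000 normalized boxes are all illustrative:

```python
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

MODEL_ID = "your-org/layoutlmv3-finetuned-funsd"  # hypothetical fine-tuned checkpoint

# apply_ocr=False lets us pass our own words and (0-1000 normalized) bounding boxes,
# avoiding the Tesseract dependency of the built-in OCR path.
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = LayoutLMv3ForTokenClassification.from_pretrained(MODEL_ID)

image = Image.new("RGB", (1000, 1000), color="white")   # stand-in for a scanned form
words = ["Invoice", "No:", "12345"]
boxes = [[50, 50, 200, 80], [210, 50, 280, 80], [290, 50, 400, 80]]

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
logits = model(**encoding).logits
pred_ids = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in pred_ids])
```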
BRIEF-DETAILS: Multilingual RoBERTa model specialized for Bulgarian-English embeddings, trained on parallel data for semantic similarity tasks
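A mean-pooling similarity sketch, assuming a placeholder model ID; the pooling strategy may differ from the model's own recommendation:

```python
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "your-org/roberta-bg-en-embeddings"  # hypothetical ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

sentences = ["Котката спи на дивана.", "The cat is sleeping on the sofa."]
enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**enc).last_hidden_state          # (batch, seq, dim)

# Mean-pool over real tokens only, using the attention mask.
mask = enc["attention_mask"].unsqueeze(-1).float()
embs = (hidden * mask).sum(1) / mask.sum(1)

score = torch.nn.functional.cosine_similarity(embs[0], embs[1], dim=0)
print(f"BG-EN similarity: {score.item():.3f}")
```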
BRIEF-DETAILS: StyleGAN-based anime face generator producing high-quality 512px anime character faces. Pairs with SeFa (closed-form latent factorization) for semantic face manipulation.
BRIEF DETAILS: BERT base model fine-tuned for Bulgarian using a masked language modeling (MLM) objective. Trained on the OSCAR, Chitanka and Wikipedia datasets. Cased model, so text processing is case-sensitive.
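A fill-mask sketch with a placeholder checkpoint ID, assuming the standard BERT [MASK] token:

```python
from transformers import pipeline

# Hypothetical model ID; replace with the actual Bulgarian BERT checkpoint.
fill = pipeline("fill-mask", model="your-org/bert-base-bulgarian-cased")

# "София е столицата на [MASK]." = "Sofia is the capital of [MASK]."
for pred in fill("София е столицата на [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```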
BRIEF-DETAILS: Lightweight GAN model for NFT generation, specifically trained on the Azuki collection. Built by Aleksey Korshuk in 2022 for unconditional image generation.
Brief Details: A RoBERTa-based model fine-tuned for detecting depression severity levels (none, moderate, severe) from social media text with 0.54 macro F1-score.
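A sketch of scoring posts for all three severity labels; the checkpoint ID is a placeholder and the label names depend on the model's config:

```python
from transformers import pipeline

# Hypothetical checkpoint; the real model predicts none / moderate / severe.
clf = pipeline("text-classification",
               model="your-org/roberta-depression-severity",
               top_k=None)   # return scores for all severity labels

posts = ["Had a great weekend with friends!",
         "I can't get out of bed and nothing feels worth doing anymore."]
for post, scores in zip(posts, clf(posts)):
    print(post, "->", scores)
```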
Brief-details: T5-base Finnish language model with 814M parameters, trained on 76GB of cleaned Finnish text. Features 36 transformer layers and requires task-specific fine-tuning.
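Since the model requires task-specific fine-tuning, here is a single illustrative training step; the checkpoint ID, the Finnish summarization pair, and the task prefix are all assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "your-org/t5-base-finnish"  # hypothetical ID for the Finnish T5 checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# One illustrative training step on a made-up Finnish summarization pair.
inputs = tokenizer("tiivistä: Helsinki on Suomen pääkaupunki ja suurin kaupunki...",
                   return_tensors="pt", truncation=True)
labels = tokenizer("Helsinki on Suomen pääkaupunki.",
                   return_tensors="pt", truncation=True).input_ids

loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"loss: {loss.item():.3f}")
```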
Brief-details: A GAN model trained on 1000 unique butterfly images to generate high-fidelity butterfly images at 512x512 resolution, based on the Lightweight GAN architecture for few-shot synthesis.
Brief-details: A fine-tuned RoBERTa model based on PlanTL-GOB-ES/roberta-base-bne, optimized for opinion polarity analysis of Mexican Spanish text and trained with mixed float16 precision.
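A direct tokenizer/model sketch applying softmax over the logits; the checkpoint ID and label mapping are assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "your-org/roberta-bne-polaridad-mx"  # hypothetical fine-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

text = "El servicio del restaurante fue pésimo y la comida llegó fría."
with torch.no_grad():
    logits = model(**tokenizer(text, return_tensors="pt")).logits
probs = logits.softmax(-1).squeeze()
for i, p in enumerate(probs.tolist()):
    print(model.config.id2label[i], round(p, 3))
```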
Brief Details: A powerful Hebrew-to-English neural machine translation model from Helsinki-NLP, reaching a BLEU score of 53.8 on the Tatoeba test set. Part of the OPUS-MT project.
Brief-details: Neural MT model by Helsinki-NLP for Greek-to-English translation. Built on transformer-big architecture with SentencePiece tokenization (32k). Achieves 68.8 BLEU on Tatoeba test.
BRIEF DETAILS: Large-scale English-to-Spanish neural translation model from Helsinki-NLP, achieving 57.2 BLEU on Tatoeba test set, trained on OPUS data.
BRIEF DETAILS: Neural MT model for English-Hungarian translation, based on transformer-big architecture with SentencePiece tokenization and OPUS training data. Strong BLEU scores on benchmarks.
BRIEF DETAILS: High-performance English-to-French neural translation model from the OPUS-MT project, achieving a BLEU score of 53.2 on the Tatoeba test set with a transformer-big architecture.
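The OPUS-MT translation entries above (he-en, el-en, en-es, en-hu, en-fr) share the same usage pattern; a minimal sketch with the widely used `Helsinki-NLP/opus-mt-en-fr` checkpoint, swapping in the relevant language pair as needed (the transformer-big `opus-mt-tc-big-*` variants load the same way):

```python
from transformers import pipeline

# Swap in the relevant OPUS-MT language pair (he-en, el-en, en-es, en-hu, en-fr, ...).
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

print(translator("The weather is beautiful today.", max_length=64))
# e.g. [{'translation_text': "Le temps est magnifique aujourd'hui."}]
```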
Brief Details: ONNX-exported version of tiny-mbart model, optimized for efficient deployment and cross-platform compatibility in machine translation tasks
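A loading sketch via `optimum.onnxruntime`, assuming the repository already contains the exported ONNX files; the model ID is a placeholder, and since tiny-mbart is a tiny randomly initialized test model, the generated text is not meaningful:

```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer

MODEL_ID = "your-org/tiny-mbart-onnx"  # hypothetical ID for the exported checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = ORTModelForSeq2SeqLM.from_pretrained(MODEL_ID)   # loads the ONNX graph via ONNX Runtime

inputs = tokenizer("Hello world", return_tensors="pt")
generated = model.generate(**inputs, max_length=20)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```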
Brief-details: Vision Transformer (ViT) model trained on ImageNet-21k & fine-tuned on ImageNet-1k. 86.9M params, 384x384 input, optimized for classification.
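A quick inference sketch; `google/vit-base-patch16-384` is assumed to be the matching checkpoint, so verify against the actual card:

```python
from transformers import pipeline

# Assumed checkpoint name for the 384x384 ViT; verify against the actual card.
classifier = pipeline("image-classification", model="google/vit-base-patch16-384")

preds = classifier("http://images.cocodataset.org/val2017/000000039769.jpg", top_k=3)
for p in preds:
    print(p["label"], round(p["score"], 3))
```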
BRIEF-DETAILS: Test version of the PatchTSMixer time series model from IBM Research; links to the official pretrained model on Hugging Face for time series analysis.
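A forward-pass sketch using the `transformers` PatchTSMixer classes; the configuration values below are illustrative, not taken from the card:

```python
import torch
from transformers import PatchTSMixerConfig, PatchTSMixerForPrediction

# Illustrative configuration values (not taken from the model card).
config = PatchTSMixerConfig(
    context_length=96,        # length of the input window
    prediction_length=24,     # forecast horizon
    num_input_channels=3,     # number of time series variables
    patch_length=8,
    patch_stride=8,
)
model = PatchTSMixerForPrediction(config)

# Random batch: (batch_size, context_length, num_input_channels)
past_values = torch.randn(2, 96, 3)
with torch.no_grad():
    out = model(past_values=past_values)

print(out.prediction_outputs.shape)   # expected: (2, 24, 3)
```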
Brief-details: Polish RoBERTa-based retrieval model optimized for semantic search, featuring 1024-dimensional vectors and trained via knowledge distillation from English models.
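A retrieval sketch with `sentence-transformers`; the model ID, query, and passages are placeholders:

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical model ID; replace with the actual Polish retriever checkpoint.
model = SentenceTransformer("your-org/polish-roberta-retriever")

query = "Jakie jest najgłębsze jezioro w Polsce?"
passages = [
    "Hańcza jest najgłębszym jeziorem w Polsce, osiągając 106 m głębokości.",
    "Warszawa jest stolicą i największym miastem Polski.",
]

q_emb = model.encode(query, convert_to_tensor=True)      # expected to be 1024-dimensional
p_emb = model.encode(passages, convert_to_tensor=True)

scores = util.cos_sim(q_emb, p_emb)[0]
for passage, score in zip(passages, scores.tolist()):
    print(round(score, 3), passage)
```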