Brief Details: LayoutLMv3 model fine-tuned on the PubLayNet dataset, achieving 95.1 mAP (IoU 0.50:0.95) for document layout analysis. Built for document AI tasks.
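In the original repository this PubLayNet fine-tune runs behind a Detectron2 detection head, so the sketch below only shows the transformers-side encoder API, using the public base checkpoint (not the fine-tune above) with manually supplied OCR words and boxes:

```python
# Minimal sketch of the LayoutLMv3 encoder API in transformers.
# Uses the public base checkpoint, not the PubLayNet detection fine-tune.
from PIL import Image
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModel.from_pretrained("microsoft/layoutlmv3-base")

image = Image.open("page.png").convert("RGB")   # illustrative input file
words = ["Introduction"]                        # words from an external OCR step
boxes = [[70, 50, 240, 80]]                     # one box per word, 0-1000 scale
inputs = processor(image, words, boxes=boxes, return_tensors="pt")
outputs = model(**inputs)                       # joint text + layout embeddings
```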
Brief Details: T5-small model fine-tuned for contradiction detection on the SNLI dataset, achieving a 34.42 ROUGE-1 score after 8 epochs of training; optimized for text generation tasks.
Brief Details: KRISSBERT - a biomedical entity linking model leveraging the UMLS ontology and PubMed data for contextual entity disambiguation, achieving state-of-the-art results.
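KRISSBERT links mentions by comparing contextual mention embeddings against entity prototype embeddings. Below is a heavily simplified sketch of that idea using plain [CLS] vectors and cosine similarity; the checkpoint id matches Microsoft's public release but should be verified, and the real pipeline builds prototypes from UMLS-tagged PubMed examples rather than from bare entity names.

```python
# Simplified sketch of embedding-based biomedical entity linking:
# encode the mention in context, rank candidate entities by cosine similarity.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL"  # verify on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

def embed(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden[:, 0]  # [CLS] embedding as the mention/entity vector

mention = embed("ER status was assessed by immunohistochemistry.")
candidates = {"estrogen receptor": embed("estrogen receptor"),
              "emergency room": embed("emergency room")}
scores = {name: torch.cosine_similarity(mention, vec).item()
          for name, vec in candidates.items()}
print(max(scores, key=scores.get))  # best-matching candidate entity
```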
Brief Details: A GAN-based model for generating Bored Ape Yacht Club NFT-style images, built by huggingnft under the MIT license for unconditional image generation.
Brief Details: Neural MT model (232M params) for English-to-North-Germanic translation. Supports Danish, Icelandic, Norwegian, and Swedish. BLEU scores range from 21 to 61 across language pairs.
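A minimal usage sketch, assuming this entry describes the public Helsinki-NLP/opus-mt-en-gmq checkpoint: multi-target OPUS-MT models select the output language with a token such as >>swe<< prefixed to the source sentence.

```python
# Hedged sketch of English-to-North-Germanic translation with OPUS-MT;
# the checkpoint id is an assumption about which model this entry covers.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-gmq")
# ">>swe<<" selects Swedish; ">>dan<<", ">>isl<<", ">>nob<<" work likewise.
print(translator(">>swe<< The weather is beautiful today.")[0]["translation_text"])
```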
Brief Details: A GAN-based model for generating CryptoPunk-style NFT images, built by Aleksey Korshuk on a Transformer-based architecture and released under the MIT license.
Brief Details: An English-to-Arabic neural translation model with 239M parameters, achieving a 29.4 BLEU score on FLORES-101; part of the OPUS-MT project.
Brief Details: A Spanish sexism detection model fine-tuned on the EXIST dataset, achieving 80% accuracy and an 89% F2 score. Built on RoBERTuito for tweet analysis.
Brief Details: CodeGen-2B-multi is a 2B-parameter program synthesis model trained on multiple programming languages, specialized in converting natural language descriptions into executable code.
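Prompting the model follows the standard causal-LM pattern; a minimal sketch with the public Salesforce checkpoint (generation settings are illustrative):

```python
# Minimal sketch of program synthesis with CodeGen: prompt with a comment
# and function signature, let the model complete the body.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-multi")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-multi")

prompt = "# Python function that returns the n-th Fibonacci number\ndef fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```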
Brief Details: T5-small model fine-tuned for Spanish-Nahuatl translation, featuring 60.5M parameters. Trained on ~23K parallel sentences with a two-stage training approach.
Brief Details: T5-based question-answering model fine-tuned on the DuoRC dataset, achieving a 49% F1 score on SelfRC. Created by Italian researchers for generative QA tasks.
Brief Details: Spanish text-neutralizer model (60.5M params) for generating gender-neutral language. Built on T5; achieves a 93.8 BLEU score.
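A minimal sketch of calling such a T5 neutralizer through the text2text pipeline; the model id below is a placeholder, and the actual checkpoint may expect a task prefix on the input.

```python
# Hedged sketch of gender-neutral rewriting with a T5 text-to-text model;
# "somos-nlp/es-text-neutralizer" is a hypothetical placeholder id.
from transformers import pipeline

neutralizer = pipeline("text2text-generation", model="somos-nlp/es-text-neutralizer")
# Input: "The (male and female) students attended class."
print(neutralizer("Los alumnos y las alumnas asistieron a clase.")[0]["generated_text"])
```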
Brief Details: T5-based Russian text correction model (223M params) specialized in fixing speech recognition outputs, particularly wav2vec2 transcriptions.
Brief Details: DiffusionCLIP model trained on CelebA-HQ for text-guided face image manipulation, offering robust reconstruction and style transfer capabilities through a diffusion-based approach.
Brief Details: A fine-tuned GPT model based on my-gpt-model-3, using the TensorFlow framework with AdamWeightDecay optimization and reaching a 4.9979 training loss.
Brief Details: T5-based business name generator that creates context-aware company names from business descriptions. Trained on 350k websites.
Brief Details: Turkish text summarization model based on mBART-large with 611M parameters, achieving a 46.7 ROUGE-1 score on the MLSUM dataset.
Brief Details: A Turkish text summarization model with 125M parameters, achieving a 43.2 ROUGE-1 score on the MLSUM dataset. Built on a Transformer architecture and MIT-licensed.
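Either Turkish summarizer can be driven through the standard summarization pipeline; a minimal sketch with a placeholder model id (the real checkpoint names should be taken from the Hub):

```python
# Hedged sketch of Turkish abstractive summarization;
# "user/mbart-large-turkish-sum" is a hypothetical placeholder id.
from transformers import pipeline

summarizer = pipeline("summarization", model="user/mbart-large-turkish-sum")
article = "..."  # a Turkish news article
print(summarizer(article, max_length=80, min_length=20)[0]["summary_text"])
```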
Brief Details: A CANINE-based text classification model fine-tuned on the SST-2 dataset, achieving 85.78% accuracy; optimized for sentiment analysis with linear learning-rate scheduling.
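A minimal sketch of running such a checkpoint for sentiment classification; the model id is a placeholder for the SST-2 fine-tune described above. CANINE operates directly on Unicode characters, so no wordpiece vocabulary is involved.

```python
# Hedged sketch of sentiment classification with a CANINE checkpoint;
# "user/canine-s-sst2" is a hypothetical placeholder id.
from transformers import pipeline

classifier = pipeline("text-classification", model="user/canine-s-sst2")
print(classifier("A gorgeous, witty, seductive movie."))
```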
Brief Details: A 570M-parameter Pegasus model fine-tuned on the Newsroom dataset for text rewriting, achieving a 46.69 ROUGE-1 score with strong summarization capabilities.
Brief Details: Spanish-optimized 6B-parameter GPT-J model, fine-tuned on the mC4-es dataset. Specializes in Spanish text generation with rotary positional embeddings.
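This description matches the public BERTIN GPT-J release; a hedged generation sketch (verify the exact model id, and note the fp16 weights alone need roughly 12 GB of GPU memory):

```python
# Hedged sketch of Spanish text generation with a GPT-J checkpoint;
# the model id is an assumption about which release this entry covers.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "bertin-project/bertin-gpt-j-6B"  # verify the exact id on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

inputs = tokenizer("El flamenco es un arte que", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```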