Brief-details: Realistic Vision v1.2 is an AI image generation model by Yntec, specialized in creating photorealistic outputs with particular strength in human portraits and facial details.
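A minimal text-to-image sketch with Hugging Face diffusers is below; the repo id is an assumption based on the uploader's name, not a verified path.

```python
# Hedged sketch: loading a Stable Diffusion checkpoint with diffusers.
# The repo id "Yntec/Realistic_Vision_1.2" is an assumption, not verified.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/Realistic_Vision_1.2",   # hypothetical repo id
    torch_dtype=torch.float16,
).to("cuda")

# Portrait prompts play to the model's reported strengths.
image = pipe("photorealistic portrait of an elderly man, natural light").images[0]
image.save("portrait.png")
```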
Brief-details: Quantized version of DeepSeek-R1 that maintains full accuracy while reducing size by 75%. Runs at 66.40 tokens/sec while using only 1228 MB of RAM.
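Quantized checkpoints like this are typically run locally through llama.cpp; the sketch below uses llama-cpp-python with a hypothetical GGUF file name.

```python
# Hedged sketch: running a quantized GGUF checkpoint with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-r1-q4_k_m.gguf",  # hypothetical file name
    n_ctx=4096,                            # context window size
)

out = llm("Explain quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```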
Brief-details: An advanced AI model for generating talking head animations from single images, developed by vinthony. Enables realistic facial animations synchronized with audio input.
Brief-details: ESPnet2 TTS model using the FastSpeech2 architecture, trained on the LJSpeech dataset and developed by kan-bayashi for high-quality English speech synthesis.
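Inference with ESPnet2 models usually goes through the Text2Speech helper; the model tag below is assumed from kan-bayashi's naming convention, not verified.

```python
# Hedged sketch: ESPnet2 TTS inference with a FastSpeech2 checkpoint.
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Model tag assumed, not verified.
tts = Text2Speech.from_pretrained("kan-bayashi/ljspeech_fastspeech2")

# Synthesize and save at the model's native sample rate.
speech = tts("Hello from FastSpeech2.")["wav"]
sf.write("out.wav", speech.numpy(), tts.fs)
```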
Brief-details: A ResNet34-based pet breed classifier built with fastai and fine-tuned on the Oxford-IIIT Pet Dataset (37 breeds). Pre-trained on ImageNet for robust pet identification.
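This classifier maps onto fastai's standard pets workflow; a sketch of that recipe (not the author's exact training script) follows.

```python
# Hedged sketch: fastai's standard Oxford-IIIT Pet recipe.
from fastai.vision.all import *

path = untar_data(URLs.PETS) / "images"

# The breed label is encoded in each file name, e.g. "great_pyrenees_102.jpg".
dls = ImageDataLoaders.from_name_re(
    path, get_image_files(path), pat=r"(.+)_\d+\.jpg$",
    item_tfms=Resize(224), bs=64,
)

# ResNet34 backbone pre-trained on ImageNet, fine-tuned on the pet breeds.
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```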
Brief-details: A multilingual NACE code classifier based on XLM-RoBERTa, fine-tuned on 2.5M business descriptions in multiple European languages for accurate activity classification.
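Classifiers like this are typically served through the transformers pipeline API; the repo id below is hypothetical.

```python
# Hedged sketch: NACE code classification via the transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="org/xlm-roberta-nace",  # hypothetical repo id
)

# Business descriptions in any supported European language.
print(classifier("Großhandel mit elektronischen Bauteilen"))
```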
Brief-details: Turkish ConvBERT model fine-tuned for NLI tasks, achieving 81.11% accuracy. Trained for 3 epochs with the Adam optimizer, it shows strong performance on Turkish language understanding.
Brief-details: An Italian T5-based model trained on 28k news articles for generating topic tags and enabling asymmetric semantic search functionality.
Brief-details: Decision Transformer implementation for Atari games, offering pretrained models for Breakout, Pong, Qbert, and Seaquest. Uses GPT architecture for gameplay decision-making.
Brief-details: BioBERT v1.1 is a biomedical language model pre-trained on PubMed abstracts and PMC full-text articles, built upon BERT to enhance biomedical text mining capabilities.
Brief-details: BioBERT Large Cased model fine-tuned on SQuAD, specialized for biomedical question-answering tasks. Built by DMIS-lab on the BERT architecture with biomedical corpus training.
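A minimal extractive-QA sketch follows; the repo id is assumed from DMIS-lab's naming scheme, not verified.

```python
# Hedged sketch: biomedical extractive QA with a BioBERT SQuAD fine-tune.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="dmis-lab/biobert-large-cased-v1.1-squad",  # assumed repo id
)

result = qa(
    question="What does BRCA1 regulate?",
    context="BRCA1 is a tumor suppressor gene that regulates DNA repair.",
)
print(result["answer"], result["score"])
```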
Brief-details: Polish BERT language model (cased version) with 110M parameters, trained on diverse Polish corpora. Optimized for NLP tasks with Whole Word Masking.
Brief-details: German text summarization model based on mT5-small, MIT-licensed for commercial use. Trained on the SwissText dataset (84k examples); ROUGE-1: 16.80.
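Summarization models of this kind are usually called through the transformers pipeline; the repo id below is hypothetical.

```python
# Hedged sketch: German abstractive summarization with an mT5-small fine-tune.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="org/mt5-small-german-sum",  # hypothetical repo id
)

text = (
    "Die Schweizer Bundesbahnen haben heute einen neuen Fahrplan "
    "vorgestellt, der ab Dezember deutlich mehr Verbindungen bietet."
)
print(summarizer(text, max_length=32, min_length=5)[0]["summary_text"])
```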
Brief-details: Multilingual XLM-RoBERTa large model fine-tuned on SQuAD 2.0 for extractive QA, supporting multiple languages with strong performance.
Brief-details: German ELECTRA-large model optimized for German NLP tasks, achieving SOTA performance on GermEval benchmarks. Trained by deepset.
Brief-details: German ELECTRA-based QA model trained on GermanQuAD dataset. Specializes in extractive question answering with strong performance on German text.
Brief-details: German BERT-large model trained by deepset, optimized for German NLP tasks with state-of-the-art performance on GermEval benchmarks.
Brief-details: A compact BERT-based question encoder optimized for multi-modal retrieval tasks, developed by deepset for efficient question understanding and information retrieval.
Brief-details: Perceiver IO vision model pre-trained on ImageNet, using learned position embeddings for image classification and achieving 72.7% top-1 accuracy across ImageNet's 1,000 classes.
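transformers ships a dedicated class for the learned-position-embedding variant; the checkpoint name below is assumed, not verified.

```python
# Hedged sketch: Perceiver IO image classification with learned embeddings.
import torch
from PIL import Image
from transformers import AutoImageProcessor, PerceiverForImageClassificationLearned

name = "deepmind/vision-perceiver-learned"  # assumed checkpoint name
processor = AutoImageProcessor.from_pretrained(name)
model = PerceiverForImageClassificationLearned.from_pretrained(name)

inputs = processor(images=Image.open("cat.jpg"), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```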
Brief-details: A speech recognition model fine-tuned on Slovak Common Voice 8.0, achieving 49.57% WER and 13.33% CER, based on the wav2vec2-xls-r-300m architecture.
Brief-details: A fine-tuned Whisper-tiny model optimized for Russian dysarthric speech recognition, achieving 9.1% WER after training with the Adam optimizer and linear learning-rate scheduling.
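Both ASR fine-tunes above (wav2vec2 and Whisper) can be driven through the same transformers pipeline; the repo id below is hypothetical.

```python
# Hedged sketch: transcription with a fine-tuned ASR checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="org/whisper-tiny-ru-dysarthric",  # hypothetical repo id
)

print(asr("sample.wav")["text"])
```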