Brief Details: DINOv2-trained Vision Transformer for image feature extraction, 86.6M params, self-supervised on LVD-142M dataset, handles 518x518 images
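A minimal feature-extraction sketch via transformers, assuming the Hub ID facebook/dinov2-base (the repository name is inferred from the description, not stated above):

```python
# Sketch: extract DINOv2 image features with transformers.
# The model ID "facebook/dinov2-base" is an assumption, not stated above.
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import torch

processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
model = AutoModel.from_pretrained("facebook/dinov2-base")

image = Image.open("example.jpg")  # any RGB image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pooled CLS embedding: one 768-dim feature vector per image.
features = outputs.last_hidden_state[:, 0]
print(features.shape)  # torch.Size([1, 768])
```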
Brief Details: BERT-based multi-label emotion classifier trained on sem_eval_2018_task_1, capable of detecting any of 11 distinct emotions in a given text.
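A minimal multi-label inference sketch; the model ID below is a hypothetical placeholder, and sigmoid-per-class thresholding is the standard pattern for multi-label heads:

```python
# Sketch: multi-label emotion inference with sigmoid thresholding.
# MODEL_ID is a placeholder; substitute the actual Hub repository name.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "your-org/bert-semeval2018-emotions"  # hypothetical
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

inputs = tokenizer("I can't believe we won the finals!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label: apply sigmoid per class, keep everything above 0.5.
probs = torch.sigmoid(logits)[0]
labels = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(labels)  # e.g. ['joy', 'optimism']
```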
Brief Details: BERT model fine-tuned for Part-of-Speech tagging with 109M parameters, supporting 17 PoS tags. Widely used, with 88.5K downloads.
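A minimal tagging sketch with the token-classification pipeline; the model ID is a hypothetical placeholder:

```python
# Sketch: PoS tagging via the token-classification pipeline.
# MODEL_ID is a placeholder for the actual Hub repository name.
from transformers import pipeline

MODEL_ID = "your-org/bert-pos-tagger"  # hypothetical
tagger = pipeline("token-classification", model=MODEL_ID,
                  aggregation_strategy="simple")

for token in tagger("The quick brown fox jumps over the lazy dog."):
    print(token["word"], token["entity_group"])  # word + tag, e.g. DET, ADJ
```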
Brief Details: Speech emotion recognition model based on wav2vec2, achieving 82.23% accuracy across 8 emotion classes. 316M parameters, Apache 2.0 licensed.
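A minimal sketch using the audio-classification pipeline; the model ID is a hypothetical placeholder:

```python
# Sketch: 8-way speech emotion recognition with an audio pipeline.
# MODEL_ID is a placeholder for the actual Hub repository name.
from transformers import pipeline

MODEL_ID = "your-org/wav2vec2-speech-emotion"  # hypothetical
classifier = pipeline("audio-classification", model=MODEL_ID)

# Accepts a path to an audio file (or a raw float32 array at 16 kHz).
print(classifier("speech_sample.wav", top_k=3))
```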
Brief Details: DPR reader model trained on the Natural Questions dataset for open-domain QA, using a BERT architecture for passage re-ranking and answer-span extraction.
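A minimal sketch assuming the Hub ID facebook/dpr-reader-single-nq-base (inferred from the Natural Questions description):

```python
# Sketch: DPR reader scoring a passage and extracting an answer span.
# The Hub ID "facebook/dpr-reader-single-nq-base" is assumed from the description.
from transformers import DPRReader, DPRReaderTokenizer

model_id = "facebook/dpr-reader-single-nq-base"
tokenizer = DPRReaderTokenizer.from_pretrained(model_id)
model = DPRReader.from_pretrained(model_id)

encoded = tokenizer(
    questions=["What is the capital of France?"],
    titles=["France"],
    texts=["Paris is the capital and most populous city of France."],
    return_tensors="pt",
)
outputs = model(**encoded)

# start/end logits locate the answer span; relevance_logits rank passages.
print(outputs.start_logits.shape, outputs.relevance_logits)
```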
Brief Details: Vision Transformer model fine-tuned for pneumonia detection in chest X-rays with 97.42% accuracy. Based on the ViT-base architecture with 85.8M parameters.
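A minimal sketch with the image-classification pipeline; the model ID is a hypothetical placeholder:

```python
# Sketch: chest X-ray classification with an image pipeline.
# MODEL_ID is a placeholder for the actual Hub repository name.
from transformers import pipeline

MODEL_ID = "your-org/vit-pneumonia-xray"  # hypothetical
classifier = pipeline("image-classification", model=MODEL_ID)
print(classifier("chest_xray.png"))  # e.g. [{'label': 'PNEUMONIA', 'score': ...}]
```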
Brief Details: Openjourney v4 is a Stable Diffusion-based text-to-image model trained on 124k+ Midjourney v4 images, generating high-quality images without requiring a style-prefix token.
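A minimal generation sketch via diffusers, assuming the Hub ID prompthero/openjourney-v4 (inferred, not stated above):

```python
# Sketch: text-to-image generation with Openjourney v4 via diffusers.
# The Hub ID "prompthero/openjourney-v4" is assumed from the description.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "prompthero/openjourney-v4", torch_dtype=torch.float16
).to("cuda")

# Per the description, no "mdjrny-v4 style" prompt prefix is needed in v4.
image = pipe("a cozy cabin in a snowy forest, golden hour").images[0]
image.save("cabin.png")
```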
Brief Details: A 12.2B parameter SLERP-merged multilingual model supporting 9 languages, trained on 8 diverse datasets with ChatML format for enhanced conversational abilities.
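A minimal chat sketch; the model ID is a hypothetical placeholder, and apply_chat_template renders whatever template (here ChatML) the tokenizer ships:

```python
# Sketch: ChatML-style chat with a merged model via chat templates.
# MODEL_ID is a placeholder for the actual Hub repository name.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "your-org/merged-12b-chatml"  # hypothetical
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16,
                                             device_map="auto")

messages = [{"role": "user", "content": "Summarize SLERP merging in one sentence."}]
# The tokenizer's chat template renders the ChatML <|im_start|>/<|im_end|> format.
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```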
Brief Details: A lightweight Chinese NLP model based on the ALBERT architecture, specialized in word segmentation tasks. Supports traditional Chinese; 89K+ downloads.
Brief Details: Llama 3's 8B instruction-tuned model quantized to 4-bit precision, using 58% less memory and delivering 2.4x faster inference.
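A minimal 4-bit loading sketch with bitsandbytes; the model ID is a hypothetical placeholder:

```python
# Sketch: loading an instruction-tuned model in 4-bit with bitsandbytes.
# MODEL_ID is a placeholder; the pattern applies to any 4-bit-ready checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "your-org/llama-3-8b-instruct-4bit"  # hypothetical
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID,
                                             quantization_config=quant_config,
                                             device_map="auto")
```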
Brief Details: BLEURT-20-D12 is a PyTorch-based sequence-classification model for evaluating text similarity, producing a learned quality score for each reference-candidate pair.
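A minimal scoring sketch using the generic sequence-pair regression pattern; the model ID is a hypothetical placeholder, and the actual port may ship its own loading instructions:

```python
# Sketch: scoring a candidate against a reference with a BLEURT-style model.
# MODEL_ID is a placeholder for the actual PyTorch BLEURT-20-D12 port.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "your-org/BLEURT-20-D12"  # hypothetical
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

reference = "The cat sat on the mat."
candidate = "A cat was sitting on the mat."
inputs = tokenizer(reference, candidate, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)  # higher = closer to the reference
```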
Brief Details: Yi-1.5-9B-Chat-16K is an 8.83B parameter chat model with a 16K context length, built on the Yi architecture with enhanced coding, math, and reasoning capabilities.
Brief Details: A 70M parameter language model from EleutherAI's Pythia suite, trained on deduplicated Pile data. Designed for AI research and interpretability studies.
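A minimal generation sketch, assuming the Hub ID EleutherAI/pythia-70m-deduped (inferred from the description):

```python
# Sketch: plain text generation with a Pythia suite checkpoint.
# The Hub ID "EleutherAI/pythia-70m-deduped" is assumed from the description.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "EleutherAI/pythia-70m-deduped"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The Pile is a dataset that", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```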
Brief Details: Multilingual NER model supporting 10 languages, fine-tuned from the mBERT base model. Identifies LOC, ORG, and PER entities. 177M params, 89.8K downloads.
Brief Details: Japanese DeBERTa V2 base model (137M params) trained on Wikipedia, CC-100, and OSCAR for masked language modeling, reaching 0.779 MLM accuracy.
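A minimal fill-mask sketch, assuming the Hub ID ku-nlp/deberta-v2-base-japanese (inferred from the description; the upstream card may expect Juman++-segmented input):

```python
# Sketch: masked-token prediction with a fill-mask pipeline.
# The Hub ID "ku-nlp/deberta-v2-base-japanese" is assumed from the description.
# NB: the upstream card may expect input pre-segmented with Juman++.
from transformers import pipeline

fill = pipeline("fill-mask", model="ku-nlp/deberta-v2-base-japanese")
for pred in fill("京都大学で自然言語処理を[MASK]する。")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```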
Brief Details: YiffyMix is a specialized text-to-image diffusion model optimized for artistic and furry content generation, featuring MoistMixV2 VAE integration.
Brief Details: Korean language sentence-similarity model (111M params) based on RoBERTa, fine-tuned on KLUE datasets with a continual-learning approach, achieving 0.89 Pearson correlation.
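A minimal similarity sketch via sentence-transformers; the model ID is a hypothetical placeholder:

```python
# Sketch: sentence-similarity scoring with sentence-transformers.
# MODEL_ID is a placeholder for the actual Korean RoBERTa similarity model.
from sentence_transformers import SentenceTransformer, util

MODEL_ID = "your-org/ko-roberta-sts"  # hypothetical
model = SentenceTransformer(MODEL_ID)

embeddings = model.encode(["오늘 날씨가 좋다.", "날씨가 맑고 화창하다."])
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity in [-1, 1]
```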
Brief Details: Korean-English bilingual LLM based on Llama-3, featuring 8B parameters, an expanded vocabulary, and Korean cultural alignment. State-of-the-art performance on the LogicKor benchmark.
Brief Details: RoBERTa-based NER model with 354M params, achieving a 97.5% F1 score on CoNLL-2003. Specialized in entity recognition for informal text such as emails and chats.
Brief Details: Midjourney is a text-to-image model built on FLUX.1-dev, offering specialized image generation capabilities; 90K+ downloads, non-commercial license.
Brief Details: VulBERTa-MLP-ReVeal is a 125M parameter RoBERTa-based model for detecting security vulnerabilities in source code, achieving 64.71% accuracy on the ReVeal dataset.