Brief-details: TinySapBERT is a compact biomedical entity representation model, distilled from PubMedBERT and trained using SapBERT methodology for efficient biomedical NER tasks.
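A minimal embedding sketch, assuming the checkpoint is `dmis-lab/TinySapBERT-from-TinyPubMedBERT-v1.0` (the repo ID is not stated in the entry):

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "dmis-lab/TinySapBERT-from-TinyPubMedBERT-v1.0"  # assumed checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# SapBERT-style models use the [CLS] token as the entity representation
names = ["myocardial infarction", "heart attack"]
inputs = tokenizer(names, padding=True, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state[:, 0]
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0))
```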
Brief-details: Korean question-answering LLM built on Llama-3.2-1B-Instruct (1.24B parameters) and fine-tuned on the KorQuAD dataset. Achieves a 36.07% exact-match (EM) score.
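A generation sketch for a KorQuAD-tuned Llama-3.2-1B-Instruct; the repo ID below is hypothetical, since the entry does not name it:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/llama-3.2-1b-korquad"  # hypothetical ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# "Where is the capital of South Korea?"
messages = [{"role": "user", "content": "대한민국의 수도는 어디입니까?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```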
Brief-details: Vision Transformer model pre-trained on ImageNet-21k and fine-tuned on ImageNet-1k, featuring 88.2M params, a 32x32 patch size, and augmentation-based training.
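A classification sketch, assuming the checkpoint is `google/vit-base-patch32-384` (which matches the stated 32x32 patch size and parameter count):

```python
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTForImageClassification

model_id = "google/vit-base-patch32-384"  # assumed checkpoint ID
processor = ViTImageProcessor.from_pretrained(model_id)
model = ViTForImageClassification.from_pretrained(model_id)

image = Image.open("cat.jpg")  # any RGB image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # ImageNet-1k label
```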
Brief-details: Multilingual text-to-text transformer model with 300M parameters, trained on the xP3 dataset and supporting 101 languages for tasks like translation and summarization.
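A text-to-text sketch, assuming the entry refers to `bigscience/mt0-small` (300M parameters, xP3-trained):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "bigscience/mt0-small"  # assumed checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# xP3-style instruction prompt
inputs = tokenizer("Translate to English: Je t'aime.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```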
Brief-details: A powerful 8.48B-parameter image captioning model built on Llama 3.1, designed for unrestricted, diverse image description generation with both SFW and NSFW support.
Brief-details: A 3.2B-parameter uncensored LLaMA model variant focused on instruction following, featuring BF16 precision and built using ablation techniques.
Brief-details: German BERT-large model fine-tuned for sentence similarity, optimized with cosine similarity. 1024-dimensional embeddings, MIT licensed, ideal for German few-shot classification.
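A similarity sketch with sentence-transformers, assuming the checkpoint is `deutsche-telekom/gbert-large-paraphrase-cosine` (which matches the description):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("deutsche-telekom/gbert-large-paraphrase-cosine")  # assumed checkpoint ID
sentences = ["Das Wetter ist heute schön.", "Heute scheint die Sonne."]
embeddings = model.encode(sentences)  # 1024-dimensional vectors
print(util.cos_sim(embeddings[0], embeddings[1]))
```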
Brief-details: Japanese sentiment analysis BERT model fine-tuned on Amazon reviews. Achieves 81.3% accuracy for 3-class classification. 22k+ downloads.
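A classification sketch; the repo ID below is hypothetical, since the entry does not name it (Japanese BERT tokenizers typically also require fugashi/unidic-lite):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="your-org/bert-japanese-amazon-sentiment")  # hypothetical ID
print(clf("この商品は本当に素晴らしいです。"))  # "This product is really wonderful."
```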
Brief-details: ParsBERT, a Persian-language BERT model trained on 3.9M documents and 1.3B words, achieving SOTA performance on Persian NLP tasks.
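A fill-mask sketch with the standard ParsBERT checkpoint `HooshvareLab/bert-base-parsbert-uncased` (assumed to be the release this entry refers to):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="HooshvareLab/bert-base-parsbert-uncased")  # assumed checkpoint ID
# "We live in [MASK]."
for pred in fill("ما در [MASK] زندگی می کنیم."):
    print(pred["token_str"], round(pred["score"], 3))
```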
Brief-details: GPTQ-quantized version of the Gemma 1.1 2B instruction-tuned model, optimized for efficient deployment with 4-bit precision; 1.32B parameters.
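A loading sketch for a 4-bit GPTQ checkpoint; it requires a GPTQ backend (e.g. optimum with auto-gptq), and the repo ID below is hypothetical, since the entry does not name it:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/gemma-1.1-2b-it-GPTQ"  # hypothetical ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ quantization config ships with the repo and is applied automatically
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write a haiku about autumn.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```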
Brief-details: SegFormer b5 encoder model pre-trained on ImageNet-1k, designed for semantic segmentation with transformers. Features a hierarchical Transformer encoder paired with a lightweight all-MLP decode head.
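A segmentation sketch; the entry describes the ImageNet-pretrained encoder (`nvidia/mit-b5`), so the ADE20K-finetuned checkpoint below is an assumption chosen to make the example runnable end to end:

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

model_id = "nvidia/segformer-b5-finetuned-ade-640-640"  # assumed checkpoint ID
processor = SegformerImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("street.jpg")  # any RGB image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)
mask = logits.argmax(dim=1)[0]  # per-pixel class indices
```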
Brief-details: Italian BERT model for sentiment analysis, fine-tuned on tweets with 82% accuracy. Handles negative, neutral, and positive sentiment classification on Italian text.
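A classification sketch, assuming the checkpoint is `neuraly/bert-base-italian-cased-sentiment` (which matches the description):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "neuraly/bert-base-italian-cased-sentiment"  # assumed checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Che bella giornata!", return_tensors="pt")  # "What a beautiful day!"
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]
for label, p in zip(model.config.id2label.values(), probs):
    print(label, f"{p:.3f}")  # negative / neutral / positive scores
```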
Brief-details: Advanced anime text-to-image model built on SDXL, featuring improved hand anatomy and concept understanding. Trained on 1.2M+ images with a sophisticated tag-ordering system.
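A text-to-image sketch with diffusers; the repo ID is an assumption (the description matches an Animagine-style SDXL checkpoint):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0",  # assumed checkpoint ID
    torch_dtype=torch.float16,
).to("cuda")

# Tag-ordered prompt: subject tags first, quality tags after
prompt = "1girl, solo, looking at viewer, cherry blossoms, masterpiece, best quality"
image = pipe(prompt, num_inference_steps=28, guidance_scale=7.0).images[0]
image.save("anime.png")
```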
Brief-details: Facebook's WMT19 Russian-to-English translation model with 291M parameters, achieving a 39.20 BLEU score. Supports neural machine translation using the FSMT architecture.
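A translation sketch with `facebook/wmt19-ru-en`, following the standard FSMT usage pattern:

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

model_id = "facebook/wmt19-ru-en"
tokenizer = FSMTTokenizer.from_pretrained(model_id)
model = FSMTForConditionalGeneration.from_pretrained(model_id)

# "Machine learning is great."
inputs = tokenizer("Машинное обучение - это здорово.", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```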
Brief-details: Realistic Vision V3.0 is a specialized text-to-image model with an integrated VAE, optimized for photorealistic generation; recommended CFG scale 3.5-7 with Euler A or DPM++ SDE Karras samplers.
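A generation sketch applying the recommended settings; the repo ID (`SG161222/Realistic_Vision_V3.0_VAE`) is an assumption:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V3.0_VAE",  # assumed checkpoint ID (bundled VAE)
    torch_dtype=torch.float16,
).to("cuda")
# DPM++ SDE Karras, one of the recommended samplers
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++", use_karras_sigmas=True
)

image = pipe(
    "RAW photo, portrait of a woman, natural light, 8k uhd",
    guidance_scale=5.0,  # within the recommended 3.5-7 range
    num_inference_steps=25,
).images[0]
image.save("photo.png")
```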
Brief-details: Zero-1-to-3, a diffusion-based model for converting single images to 3D objects. MIT licensed with 22K+ downloads; a research-focused tool.
Brief-details: Neural machine translation model from Helsinki-NLP for Arabic-to-English translation, achieving a 47.3 BLEU score on the Tatoeba test set, using the Marian NMT framework.
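A translation sketch, assuming the standard OPUS-MT checkpoint `Helsinki-NLP/opus-mt-ar-en`:

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-ar-en"  # assumed checkpoint ID
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

inputs = tokenizer("مرحبا بالعالم", return_tensors="pt")  # "Hello, world"
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```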
Brief-details: Facebook's multilingual speech model for language identification, supporting 256 languages with 966M parameters. Built on Wav2Vec2 architecture for audio classification.
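An identification sketch, assuming the checkpoint is `facebook/mms-lid-256`; MMS models expect 16 kHz mono audio:

```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification

model_id = "facebook/mms-lid-256"  # assumed checkpoint ID
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)

waveform = torch.randn(16000)  # placeholder: 1 s of 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # ISO language code
```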
Brief-details: BERT-based language model specifically trained on 100M+ patents, featuring 346M parameters and optimized for patent-specific text analysis and masked language modeling.
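A fill-mask sketch, assuming the checkpoint is `anferico/bert-for-patents` (which matches the description):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="anferico/bert-for-patents")  # assumed checkpoint ID
preds = fill("The present invention relates to a [MASK] for treating cancer.")
for p in preds:
    print(p["token_str"], round(p["score"], 3))
```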
Brief-details: Financial sentiment analysis model based on RoBERTa-large, specialized for financial texts including earnings reports, CSR documents, and ESG news. Supports 3-way classification.
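A classification sketch, assuming the checkpoint is `soleimanian/financial-roberta-large-sentiment` (which matches the description):

```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="soleimanian/financial-roberta-large-sentiment")  # assumed checkpoint ID
print(clf("Quarterly revenue grew 12% year over year, beating guidance."))
# e.g. [{'label': 'positive', 'score': ...}]
```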
Brief-details: CAFormer B36 vision model pretrained on ImageNet-22k, fine-tuned on ImageNet-1k. 98.8M parameters, optimized for image classification and feature extraction.
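A timm sketch, assuming the model name is `caformer_b36.sail_in22k_ft_in1k`:

```python
import timm
import torch
from PIL import Image

# Assumed timm name for CAFormer-B36 (in22k pretrain, in1k fine-tune)
model = timm.create_model("caformer_b36.sail_in22k_ft_in1k", pretrained=True).eval()

# Build the matching eval transform from the model's pretrained config
cfg = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**cfg, is_training=False)

img = Image.open("dog.jpg").convert("RGB")  # any RGB image
with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))
print(logits.argmax(-1).item())  # ImageNet-1k class index
```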