Brief-details: Spanish language model trained on 500M tweets, optimized for social media analysis with strong performance in hate speech, sentiment, and irony detection.
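A minimal usage sketch via the standard transformers text-classification pipeline; the model id below is a placeholder, since this entry does not name the exact Hub repo.

```python
from transformers import pipeline

# Placeholder id: this entry does not name the checkpoint, so substitute
# the actual Hub repo from the model card.
clf = pipeline("text-classification", model="org/spanish-tweet-model")

print(clf("¡Qué partido tan increíble!"))
# e.g. [{'label': 'POS', 'score': 0.98}]
```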
Brief-details: GPT2-BioPT is a Portuguese biomedical language model based on GPT-2, fine-tuned on 110MB of medical literature (16.2M tokens) for generating domain-specific text.
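Assuming the checkpoint loads as a standard GPT-2 causal LM, a short generation sketch; "pucpr/gpt2-bio-pt" is an assumed Hub id, so verify it on the Hub.

```python
from transformers import pipeline

# "pucpr/gpt2-bio-pt" is an assumed Hub id for GPT2-BioPT; verify on the Hub.
generator = pipeline("text-generation", model="pucpr/gpt2-bio-pt")

out = generator("O paciente apresentou sintomas de", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```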
Brief-details: Portuguese Clinical NER model for medical entity recognition, trained on the Brazilian clinical corpus SemClinBr using BioBERTpt, specializing in UMLS-compatible entities.
Brief-details: Portuguese BERT model specialized for clinical and biomedical NER tasks, trained on clinical notes and biomedical literature, based on BERT-Multilingual-Cased.
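A minimal token-classification sketch that applies to both Portuguese clinical NER entries above; the model id is a placeholder for whichever checkpoint you pick.

```python
from transformers import pipeline

# Placeholder id: substitute the clinical NER checkpoint from the Hub.
ner = pipeline("token-classification", model="org/biobertpt-clinical-ner",
               aggregation_strategy="simple")  # merge word pieces into entity spans

for ent in ner("Paciente com hipertensão arterial em uso de losartana."):
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```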
Brief-details: A quantized version of dolphin-2.9.4-gemma2-2b offering multiple compression variants from 1.39GB to 5.24GB, optimized for different RAM/VRAM configurations.
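Assuming the variants are GGUF files (typical for repos described this way), a minimal llama-cpp-python sketch; the filename below is hypothetical.

```python
from llama_cpp import Llama

# Filename is illustrative: pick the quantization that fits your memory budget
# (the repo's variants range from 1.39GB to 5.24GB).
llm = Llama(model_path="dolphin-2.9.4-gemma2-2b-Q4_K_M.gguf", n_ctx=4096)

out = llm("Explain quantization in one sentence:", max_tokens=64)
print(out["choices"][0]["text"])
```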
Brief-details: Qwen2.5-Coder-7B-Instruct is a 4-bit quantized code-specialized LLM with 7.61B parameters, 128K context length, and advanced code generation capabilities.
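A chat-template sketch for code generation; it uses the official base repo id for illustration, since the 4-bit variant described above lives in its own quantized repo.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Using the official base repo for illustration; the 4-bit variant described
# above lives in its own quantized repo.
model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

msgs = [{"role": "user", "content": "Write a Python function that reverses a string."}]
ids = tok.apply_chat_template(msgs, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(ids, max_new_tokens=128)[0], skip_special_tokens=True))
```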
Brief-details: MuRIL-based multilingual sentence embedding model supporting 10+ Indian languages with cross-lingual capabilities, optimized for NLI tasks.
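A cross-lingual similarity sketch with sentence-transformers; the model id is a placeholder since the entry does not name the checkpoint.

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder id: substitute the MuRIL-based sentence model from the Hub.
model = SentenceTransformer("org/muril-sentence-model")

emb = model.encode(["यह एक अच्छी फिल्म है", "This is a good movie"])
print(util.cos_sim(emb[0], emb[1]))  # cross-lingual cosine similarity
```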
Brief-details: A 95M-parameter Russian spell-checking model that corrects spelling, punctuation, and case errors. Distilled from FRED-T5-1.7B with strong performance metrics.
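A correction sketch assuming the distilled model loads as a standard seq2seq checkpoint; the model id is a placeholder.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder id: substitute the distilled spell-checking checkpoint.
model_id = "org/russian-spellcheck-95m"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

ids = tok("превет как дила", return_tensors="pt").input_ids
print(tok.decode(model.generate(ids, max_new_tokens=32)[0], skip_special_tokens=True))
```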
Brief-details: InternVL2_5-1B is a 1B parameter multimodal LLM combining the InternViT-300M vision encoder with the Qwen2.5-0.5B language model, offering efficient vision-language capabilities.
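A loading sketch only, since InternVL releases ship custom modeling code; the Hub id is assumed, and inference goes through the repo's own helpers rather than a stock pipeline.

```python
from transformers import AutoModel, AutoTokenizer

# InternVL releases ship custom modeling code, hence trust_remote_code=True.
model_id = "OpenGVLab/InternVL2_5-1B"
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval()

# Multimodal inference then goes through the repo's chat() helper together
# with its image preprocessing utilities; see the model card for details.
```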
Brief-details: Advanced vision-language model from Google that extends SigLIP with improved semantic understanding and localization capabilities, trained on the WebLI dataset.
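A zero-shot image classification sketch; the checkpoint id is an assumption for this SigLIP-successor family, so verify it on the Hub.

```python
from transformers import pipeline

# Assumed checkpoint id for the SigLIP successor family; verify on the Hub.
clf = pipeline("zero-shot-image-classification", model="google/siglip2-base-patch16-224")

print(clf("photo.jpg", candidate_labels=["a cat", "a dog", "a car"]))
```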
Brief-details: A ConvNeXt-Base CLIP model trained on the LAION-Aesthetic dataset (13B samples seen), achieving 71.0% ImageNet zero-shot accuracy. Optimized for high-resolution image-text tasks.
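A zero-shot classification sketch with OpenCLIP; the model name and pretrained tag are assumptions inferred from the description.

```python
import torch
import open_clip
from PIL import Image

# Model name and pretrained tag are assumptions based on the description;
# check open_clip.list_pretrained() for the exact pairing.
model, _, preprocess = open_clip.create_model_and_transforms(
    "convnext_base_w", pretrained="laion_aesthetic_s13b_b82k")
tokenizer = open_clip.get_tokenizer("convnext_base_w")

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])
with torch.no_grad():
    img_f = model.encode_image(image)
    txt_f = model.encode_text(text)
    img_f /= img_f.norm(dim=-1, keepdim=True)
    txt_f /= txt_f.norm(dim=-1, keepdim=True)
    print((100.0 * img_f @ txt_f.T).softmax(dim=-1))  # zero-shot label probabilities
```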
Brief-details: Diffusers version of Pony Diffusion V6 XL Turbo DPO, a specialized model for generating pony-related imagery with faster Turbo inference and Direct Preference Optimization (DPO) tuning.
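A text-to-image sketch with diffusers; the repo id is a placeholder, and the low step count reflects the usual Turbo-style setup.

```python
import torch
from diffusers import AutoPipelineForText2Image

# Placeholder id: substitute the diffusers repo for Pony Diffusion V6 XL Turbo DPO.
pipe = AutoPipelineForText2Image.from_pretrained(
    "org/pony-diffusion-v6-xl-turbo-dpo", torch_dtype=torch.float16).to("cuda")

# Turbo-style checkpoints target very few denoising steps.
image = pipe("a pony in a sunny meadow", num_inference_steps=8, guidance_scale=2.0).images[0]
image.save("pony.png")
```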
Brief-details: Chinese text emotion classifier capable of detecting 8 distinct emotional tones. Fine-tuned from xlm-roberta-large-xnli on 4,000 annotated samples, optimized for Traditional Chinese.
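A classification sketch that returns scores for all 8 labels; the model id is a placeholder.

```python
from transformers import pipeline

# Placeholder id: substitute the fine-tuned checkpoint from the Hub.
clf = pipeline("text-classification", model="org/chinese-emotion-8class", top_k=None)

print(clf("今天的表演真是太精彩了!"))  # scores for all 8 emotion labels
```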
Brief-details: Advanced 70B parameter LLM merging Llama 3.3 and DeepSeek-R1 model lineages, optimized for storytelling and dialogue with enhanced reasoning capabilities.
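A chat-template generation sketch; the repo id is a placeholder, and device_map="auto" is the usual way to shard a 70B model across available GPUs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id; at 70B, device_map="auto" shards the model across available GPUs.
model_id = "org/llama33-r1-70b-merge"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

msgs = [{"role": "user", "content": "Continue the story: the lighthouse keeper heard a knock."}]
ids = tok.apply_chat_template(msgs, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(ids, max_new_tokens=200)[0], skip_special_tokens=True))
```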
Brief-details: A FLAN-T5 Large model fine-tuned on the QuoRef dataset, optimized for extractive question answering and reading comprehension.
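A QA sketch assuming a standard seq2seq checkpoint; both the repo id and the "question: ... context: ..." input format are assumptions to check against the model card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder id; the "question: ... context: ..." format is an assumption,
# so check the model card for the exact template used in fine-tuning.
model_id = "org/flan-t5-large-quoref"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

prompt = "question: Who wrote the letter? context: Maria wrote a letter to her brother."
ids = tok(prompt, return_tensors="pt").input_ids
print(tok.decode(model.generate(ids, max_new_tokens=16)[0], skip_special_tokens=True))
```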
Brief-details: Specialized LoRA model for generating whimsical 2D game-style illustrations with distorted, cartoonish aesthetics. Requires DOTTRMSTR trigger word. Built on FLUX.1-dev base model.
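A LoRA-loading sketch with diffusers on the FLUX.1-dev base named above; the LoRA repo id is a placeholder, and the DOTTRMSTR trigger word goes in the prompt.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev",
                                    torch_dtype=torch.bfloat16).to("cuda")
# Placeholder LoRA repo id; the DOTTRMSTR trigger word comes from the entry above.
pipe.load_lora_weights("org/dottrmstr-lora")

image = pipe("DOTTRMSTR, a whimsical 2D game village", num_inference_steps=28).images[0]
image.save("village.png")
```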
Brief-details: Fine-tuned BEiT model for pedestrian age recognition with 80.73% accuracy, built on the microsoft/beit-base-patch16-224-pt22k-ft22k checkpoint.
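An image-classification sketch; the fine-tuned repo id is a placeholder.

```python
from transformers import pipeline

# Placeholder id: substitute the fine-tuned BEiT checkpoint from the Hub.
clf = pipeline("image-classification", model="org/beit-pedestrian-age")

print(clf("pedestrian.jpg"))  # ranked age-group labels with scores
```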
Brief-details: LLaVA Interleave is a multimodal chatbot based on Qwen1.5-7B-Chat, capable of processing multiple images, videos, and 3D inputs for research purposes.
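A single-image inference sketch assuming the transformers-ready llava-hf export; the Qwen-style chat markers follow the usual llava-hf prompt format, so confirm both against the model card.

```python
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Assumed transformers-ready export; confirm the prompt template on the model card.
model_id = "llava-hf/llava-interleave-qwen-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id, device_map="auto")

prompt = "<|im_start|>user <image>\nWhat is shown here?<|im_end|><|im_start|>assistant"
inputs = processor(images=Image.open("photo.jpg"), text=prompt, return_tensors="pt").to(model.device)
print(processor.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```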
Brief-details: ONNX-optimized version of a RoBERTa zero-shot classifier, converted from the original Hugging Face checkpoint for efficient deployment and inference with the Optimum library.
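A sketch of the Optimum loading pattern, which drops an ONNX Runtime model into a regular transformers pipeline; the repo id is a placeholder.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

# Placeholder id: substitute the repo that ships the ONNX export.
model_id = "org/roberta-zero-shot-onnx"
model = ORTModelForSequenceClassification.from_pretrained(model_id)
tok = AutoTokenizer.from_pretrained(model_id)

clf = pipeline("zero-shot-classification", model=model, tokenizer=tok)
print(clf("The new GPU doubles training throughput.",
          candidate_labels=["technology", "sports", "politics"]))
```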
Brief-details: A lightweight ResNet model (0.5M params) for ImageNet classification, designed for testing and validation. Features a 160x160 input resolution and a compact architecture.
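A smoke-test sketch with timm at the model's native 160x160 resolution; the model name is a placeholder for the actual tiny test ResNet.

```python
import timm
import torch

# Placeholder name: substitute the actual tiny test ResNet from timm's listings.
model = timm.create_model("test_resnet", pretrained=True).eval()

x = torch.randn(1, 3, 160, 160)  # the model's native 160x160 input
with torch.no_grad():
    print(model(x).shape)  # torch.Size([1, 1000]) ImageNet logits
```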
Brief-details: KoGPT2-base-v2 is SKT's Korean-language GPT-2 model, optimized for Korean text generation and understanding, built on the Transformer architecture.
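A Korean text-generation sketch; "skt/kogpt2-base-v2" is the assumed Hub id, so verify it on the Hub.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# "skt/kogpt2-base-v2" is the assumed Hub id; verify on the Hub.
model_id = "skt/kogpt2-base-v2"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

ids = tok("근육이 커지기 위해서는", return_tensors="pt").input_ids
print(tok.decode(model.generate(ids, max_new_tokens=40, do_sample=True)[0], skip_special_tokens=True))
```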