Brief Details: An image-generation model by Kijai hosted on Hugging Face, packaged for compatibility with ComfyUI workflows and interfaces.
Brief Details: Multilingual sequence-to-sequence model supporting 11 Indian languages plus English, trained on 452M sentences and optimized for Indic-language generation tasks.
Brief Details: A Turkish paraphrase-generation model built on a BERT2BERT architecture, trained on translated QQP data and manually collected examples, released alongside INISTA 2021 research.
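The two entries above are both sequence-to-sequence generators, which are typically driven through the transformers text2text-generation pipeline. A minimal sketch; the model ID is a placeholder, not either model's actual repository name, and multilingual models may additionally need a language tag per their model card:

```python
from transformers import pipeline

# Placeholder repository name -- substitute the real model ID from the Hub.
generator = pipeline("text2text-generation", model="org-name/seq2seq-model")

# For a paraphraser the input is simply the source sentence.
outputs = generator("Hava bugün çok güzel.", max_new_tokens=64)
print(outputs[0]["generated_text"])
```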
Brief Details: Wav2vec2-based speech recognition model specialized for Dogri-language transcription, using CTC loss for audio-to-text conversion.
Brief Details: Speech recognition model for the Wolof language, fine-tuned from XLS-R 300M with an added language model. Achieves a 21.26% WER on evaluation data.
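Both wav2vec2-style ASR checkpoints above can usually be queried through the transformers automatic-speech-recognition pipeline. A minimal sketch with a placeholder model ID and audio path:

```python
from transformers import pipeline

# Placeholder model ID and audio file -- substitute the real checkpoint.
asr = pipeline("automatic-speech-recognition", model="org-name/wav2vec2-ctc-model")

# The pipeline loads and resamples the file, then decodes the CTC output to text.
result = asr("recording.wav")
print(result["text"])
```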
Brief Details: A French language model based on CamemBERT, adapted to Twitter content through further pretraining on 15 GB of French tweets.
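Since this is a masked language model, the quickest way to probe it is the fill-mask pipeline; the model ID below is a placeholder. CamemBERT-style models use `<mask>` as the mask token:

```python
from transformers import pipeline

# Placeholder ID for the tweet-adapted CamemBERT checkpoint.
fill = pipeline("fill-mask", model="org-name/camembert-twitter")

for pred in fill("Ce film est vraiment <mask> !"):
    print(f"{pred['token_str']}: {pred['score']:.3f}")
```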
Brief Details: ALBERT-based fake-news classifier achieving 97.58% accuracy, fine-tuned on Kaggle's Fake and Real News dataset and optimized for binary classification.
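A binary classifier like this is usually exposed through the text-classification pipeline. A sketch assuming the checkpoint ships with sequence-classification weights; the model ID is a placeholder and the label names come from the checkpoint's config:

```python
from transformers import pipeline

# Placeholder model ID; real label names are defined by the checkpoint.
clf = pipeline("text-classification", model="org-name/albert-fake-news")

print(clf("Scientists confirm the moon is made of cheese."))
# e.g. [{'label': 'FAKE', 'score': 0.99}]  -- labels depend on the model
```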
Brief Details: Chinese Longformer model optimized for long-document processing (4K+ tokens) with linear attention complexity. Features whole-word masking and achieves performance comparable to BERT/RoBERTa on various NLP tasks.
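For long-document encoders, the practical difference from BERT is the tokenizer's maximum length. A sketch of encoding a long document into hidden states, assuming a standard AutoModel checkpoint (placeholder ID):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder ID for the Chinese Longformer checkpoint.
name = "org-name/chinese-longformer"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

long_text = "这是一段很长的文档。" * 400  # stand-in for a ~4K-token document
inputs = tokenizer(long_text, truncation=True, max_length=4096, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, hidden_dim)
print(hidden.shape)
```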
Brief Details: MARBERTv2 is an enhanced Arabic language model with a 512-token sequence length, pre-trained on 29B tokens and specifically optimized for Arabic QA tasks.
Brief Details: MARBERT is a large-scale Arabic language model trained on 1B tweets (128GB), supporting both MSA and Dialectal Arabic varieties, developed by UBC-NLP.
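Both MARBERT entries are encoder-only BERT-style checkpoints, so they load through the standard Auto classes. A sketch, assuming the repository name follows the UBC-NLP naming above (verify the exact ID on the Hub):

```python
from transformers import AutoModel, AutoTokenizer

# Assumed repository name based on the entry above -- check the model card.
name = "UBC-NLP/MARBERTv2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

# Encode an Arabic sentence; task heads (e.g. for QA) are fine-tuned on top.
inputs = tokenizer("القدس مدينة عريقة.", return_tensors="pt")
embeddings = model(**inputs).last_hidden_state
print(embeddings.shape)
```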
Brief Details: Uncased Finnish SBERT model trained on paraphrase data, optimized for semantic-similarity tasks using mean pooling over a FinBERT base.
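A minimal sentence-transformers sketch for a mean-pooled SBERT checkpoint like this one; the model ID is a placeholder, and the mean-pooling step is baked into the saved model config:

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder ID -- substitute the actual Finnish SBERT repository.
model = SentenceTransformer("org-name/finnish-sbert-paraphrase")

emb = model.encode(["Kissa istuu matolla.", "Kissa on maton päällä."])
print(util.cos_sim(emb[0], emb[1]))  # high score for paraphrases
```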
Brief Details: PPO-based reinforcement learning model for the AntBulletEnv-v0 environment, achieving a mean reward of 3547.01. Developed by ThomasSimonini using stable-baselines3.
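Models trained with stable-baselines3 are typically restored from the Hub with huggingface_sb3. A sketch assuming the repo ID follows the author/environment naming above; the filename is an assumption to check against the model card:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed repo ID and filename -- verify both on the model page.
checkpoint = load_from_hub(
    repo_id="ThomasSimonini/ppo-AntBulletEnv-v0",
    filename="ppo-AntBulletEnv-v0.zip",
)
model = PPO.load(checkpoint)

# Inside an evaluation loop (AntBulletEnv-v0 requires pybullet_envs):
# action, _ = model.predict(obs, deterministic=True)
```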
Brief Details: Multi-agent snowball-fight environment using Unity ML-Agents. Features 1v1 competitive gameplay with PPO training, reaching an Elo rating of 1766 after 5.1M steps.
Brief Details: GFPGAN is a blind face restoration model that uses generative facial priors to enhance low-quality face images with realistic detail in a single pass.
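The released gfpgan package wraps restoration in a GFPGANer helper. A sketch assuming a locally downloaded checkpoint path (the filename here is an assumption; download the weights from the project's release page first):

```python
import cv2
from gfpgan import GFPGANer

# Assumed local checkpoint path -- adjust to the downloaded weights file.
restorer = GFPGANer(model_path="GFPGANv1.4.pth", upscale=2)

img = cv2.imread("low_quality_face.jpg", cv2.IMREAD_COLOR)
# Returns cropped faces, per-face restorations, and the full restored image.
_, _, restored = restorer.enhance(img, has_aligned=False, paste_back=True)
cv2.imwrite("restored.jpg", restored)
```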
Brief Details: Bilingual English-German text summarization model based on mT5-small, trained on 724k examples and achieving strong ROUGE scores on news summarization.
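A bilingual summarizer like this is usually driven through the summarization pipeline; the model ID below is a placeholder:

```python
from transformers import pipeline

# Placeholder ID for the bilingual mT5-small summarizer.
summarizer = pipeline("summarization", model="org-name/mt5-small-news-summary")

article = "Long English or German news article text goes here ..."
print(summarizer(article, max_length=80, min_length=10)[0]["summary_text"])
```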
Brief Details: German RoBERTa-based sentence transformer optimized for generating semantic embeddings, with cross-lingual capabilities and strong German-language performance.
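Its cross-lingual embeddings can be checked the same way as the Finnish SBERT sketch above, this time scoring a German sentence against its English translation; the model ID is again a placeholder:

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder ID for the German RoBERTa sentence transformer.
model = SentenceTransformer("org-name/german-roberta-sentence-transformer")

emb = model.encode(["Das Wetter ist heute schön.", "The weather is nice today."])
print(util.cos_sim(emb[0], emb[1]))  # parallel sentences map to close vectors
```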
Brief Details: XLM-RoBERTa-based multilingual formality classifier supporting EN, FR, IT, and PT with high accuracy (79.4% overall, 85.2% for English).
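A sketch of batch classification across the supported languages, with a placeholder model ID; the formal/informal label names are defined by the actual checkpoint:

```python
from transformers import pipeline

# Placeholder ID -- substitute the real formality-classifier repository.
clf = pipeline("text-classification", model="org-name/xlmr-formality-classifier")

texts = ["Could you please send the report?", "balance la doc stp"]
for text, pred in zip(texts, clf(texts)):
    print(text, "->", pred["label"], round(pred["score"], 3))
```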
Brief Details: An 8B-parameter AWQ-quantized language model by circulus, optimized for general-purpose text generation and dialogue, available on Hugging Face.
Brief Details: A 24B-parameter instruction-tuned LLM optimized by Unsloth for 4-bit inference, offering 70% lower memory usage and 2x faster performance.
Brief Details: Microsoft's Phi-4 (14B parameters) optimized for 4-bit quantization by Unsloth. Offers 2x faster training with 50% less memory and excels at reasoning and math.
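The three quantized checkpoints above load through their respective toolchains. As one example, a minimal Unsloth-style 4-bit load on a CUDA machine; the model ID is a placeholder, and exact arguments should be checked against the Unsloth docs (AWQ models like the first entry instead load via transformers with autoawq installed):

```python
from unsloth import FastLanguageModel

# Placeholder ID for a 4-bit Unsloth checkpoint.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="org-name/model-unsloth-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)

FastLanguageModel.for_inference(model)  # enable the faster inference path
inputs = tokenizer("Explain quantization in one sentence.", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```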
Brief Details: BERT-LoRA model for malicious-URL detection with 98% accuracy. Classifies URLs as benign, defacement, phishing, or malware. 110M parameters.
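Because the classifier is published as a LoRA adapter over a BERT base, loading typically goes through peft. A sketch with placeholder repository names; the base model and adapter ID are assumptions to verify against the model card:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed base encoder plus a placeholder LoRA adapter repository.
base = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4  # benign / defacement / phishing / malware
)
model = PeftModel.from_pretrained(base, "org-name/bert-lora-malicious-urls")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("http://example-login-update.info/verify", return_tensors="pt")
pred = model(**inputs).logits.argmax(-1)
print(pred)  # index into the four URL classes
```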