Brief-details: mT5-small model fine-tuned for English-Spanish translation of video game reviews from Amazon, optimized for e-commerce content translation.
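A minimal usage sketch with transformers, assuming a hypothetical Hub id and that the fine-tune takes raw English input (some mT5 fine-tunes instead expect a task prefix such as "translate English to Spanish:"):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical repo id -- substitute the actual Hub path for this fine-tune.
model_id = "your-org/mt5-small-en-es-game-reviews"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

review = "Great gameplay, but the boss fights are brutally hard."
inputs = tokenizer(review, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```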
Brief-details: Multilingual NER model supporting 40 languages, fine-tuned from XLM-RoBERTa base. Identifies LOC, ORG, and PER entities with an average F1 score of 87%.
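A sketch of how a token-classification checkpoint like this is typically called through the pipeline API (the repo id below is a placeholder, not the actual model path):

```python
from transformers import pipeline

# Placeholder repo id for the 40-language XLM-R NER model described above.
ner = pipeline(
    "token-classification",
    model="your-org/xlm-r-ner-40-lang",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
for entity in ner("Angela Merkel besuchte den Louvre in Paris."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 2))
```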
Brief-details: A specialized DialoGPT model fine-tuned on Assassin's Creed Odyssey dialogue, designed to generate game-like conversations using transfer learning.
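DialoGPT-style checkpoints are plain causal LMs with dialogue turns separated by the EOS token; a minimal single-turn sketch with a hypothetical repo id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/DialoGPT-small-ac-odyssey"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# One dialogue turn, terminated with EOS as DialoGPT expects.
input_ids = tokenizer.encode("Who are you, misthios?" + tokenizer.eos_token,
                             return_tensors="pt")
reply_ids = model.generate(input_ids, max_new_tokens=40,
                           pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:],
                       skip_special_tokens=True))
```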
Brief-details: Fine-tuned XLS-R 1B model for Russian speech recognition, trained on Common Voice 8.0, Golos, and TEDx. Requires 16 kHz audio input.
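The 16 kHz requirement matters in practice: audio at any other sample rate must be resampled before feature extraction. A sketch with a placeholder repo id:

```python
import torch
import torchaudio
from transformers import AutoModelForCTC, AutoProcessor

model_id = "your-org/xls-r-1b-russian"  # placeholder repo id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

waveform, sr = torchaudio.load("sample.wav")
waveform = waveform.mean(dim=0)  # downmix to mono
if sr != 16_000:  # the model only accepts 16 kHz input
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)

inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
print(processor.batch_decode(pred_ids)[0])
```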
Brief-details: BART-tiny-random is a lightweight, randomly initialized version of the BART architecture, useful for testing and development purposes. Created by sshleifer on HuggingFace.
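Because the weights are random, outputs are gibberish by design; the value is a full BART API surface with tiny tensors, e.g. for CI smoke tests. The repo id below is inferred from the author and model name above:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Tiny random-weight BART -- loads quickly, so it is handy for exercising
# tokenize -> generate -> decode plumbing in unit tests.
model_id = "sshleifer/bart-tiny-random"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

batch = tokenizer(["smoke test"], return_tensors="pt")
out = model.generate(**batch, max_new_tokens=8)
print(tokenizer.batch_decode(out, skip_special_tokens=True))  # nonsense, by design
```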
Brief-details: HelpingAI2-3B, a 3B-parameter emotionally intelligent LLM optimized for empathetic conversations, with an 89.61 emotion score and a 128k context window.
Brief-details: A lightweight 68M-parameter LLaMA-based model optimized for vLLM and Medusa inference, with randomly initialized weights.
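A random-init model this small is mainly a plumbing-test artifact; a sketch of serving it through vLLM's offline API (placeholder repo id):

```python
from vllm import LLM, SamplingParams

# Placeholder repo id for the 68M LLaMA-architecture checkpoint above.
llm = LLM(model="your-org/llama-68m")
outputs = llm.generate(["ping"], SamplingParams(max_tokens=8))
print(outputs[0].outputs[0].text)  # random-init weights => throwaway text
```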
Brief-details: MLPerf GPT-J-6B is a fine-tuned version of GPT-J-6B optimized for MLPerf inference benchmarking, developed by Furiosa AI for performance evaluation.
Brief-details: Optimized 7B vision-language model with 4-bit quantization, 60% less memory usage, and 1.8x faster training. Excels at image/video understanding and structured outputs.
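One generic route to memory savings like these is bitsandbytes NF4 quantization at load time; a sketch with a placeholder repo id, not necessarily this checkpoint's exact recipe:

```python
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor, BitsAndBytesConfig

# Placeholder repo id; NF4 4-bit loading illustrates the general technique.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForVision2Seq.from_pretrained(
    "your-org/vlm-7b", quantization_config=bnb, device_map="auto"
)
processor = AutoProcessor.from_pretrained("your-org/vlm-7b")
```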
Brief-details: rRealism v1.0 is a specialized AI image generation model focused on achieving photorealistic results, created by digiplay and hosted on HuggingFace.
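Checkpoints like this are normally loaded through diffusers; a sketch assuming a Stable Diffusion-style pipeline and a placeholder repo path:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo path for the digiplay checkpoint described above.
pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/your-realism-checkpoint", torch_dtype=torch.float16
).to("cuda")

image = pipe("portrait photo of an elderly fisherman, golden hour light",
             num_inference_steps=30).images[0]
image.save("fisherman.png")
```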
Brief-details: ERNIE 2.0 Large English - Baidu's continual pre-training framework that outperforms BERT and XLNet on GLUE benchmarks through multi-task learning.
Brief-details: A quantized 7B parameter math-focused LLM optimized for mathematical reasoning and computations, featuring chain-of-thought prompting and commercial use support.
Brief-details: Korean SBERT model trained on KorSTS and KorNLI datasets, mapping sentences to 768-dimensional vectors for semantic search and clustering.
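A sketch with sentence-transformers showing the 768-dimensional embeddings and a cosine-similarity comparison (placeholder repo id; the Korean sentences are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your-org/ko-sbert-sts")  # placeholder repo id

sentences = ["오늘 날씨가 정말 좋네요.", "하늘이 맑고 화창합니다."]
embeddings = model.encode(sentences)               # shape: (2, 768)
print(embeddings.shape)
print(util.cos_sim(embeddings[0], embeddings[1]))  # semantic similarity score
```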
Brief-details: BERTOverflow: A BERT-base model pre-trained on 152M StackOverflow sentences, specialized for code and named entity recognition in technical discussions.
Brief-details: A specialized Q&A model focused on music production topics, hosted on HuggingFace by jacksonargo. Designed to provide accurate responses to music production queries.
Brief-details: A masked language model specialized for music processing, created by jacksonargo and hosted on HuggingFace, designed for understanding musical patterns and sequences.
Brief-details: Basque language BERT model achieving SOTA on 4 downstream tasks (POS, NER, sentiment, topic classification). Trained on 224.6M tokens.
Brief-details: Large-scale Indonesian BERT model (335.2M params) trained on 23.43GB of text data with MLM and NSP objectives, ideal for Indonesian NLP tasks.
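Since it is trained with an MLM objective, the quickest sanity check is a fill-mask call (placeholder repo id; BERT-style models use the [MASK] token):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="your-org/indobert-large")  # placeholder id
# "The capital of Indonesia is [MASK]."
for pred in fill("Ibu kota Indonesia adalah [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```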
Brief-details: GPT-2-based text generation model fine-tuned on Twitter data (2,651 tweets) for generating specific relationship content. Created by huggingtweets.
Brief-details: GPT-2-based tweet generation model trained on @gamerepulse's tweets. Fine-tuned on 321 filtered tweets for gaming-related content generation.
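huggingtweets checkpoints are ordinary GPT-2 LMs published under huggingtweets/<handle>, so both models above can be driven the same way; the exact repo id below is inferred from the handle and should be verified on the Hub:

```python
from transformers import pipeline

# Repo id assumed from the huggingtweets/<handle> naming convention.
generator = pipeline("text-generation", model="huggingtweets/gamerepulse")
out = generator("My favorite boss fight is", max_new_tokens=30)
print(out[0]["generated_text"])
```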
Brief-details: Efficient transformer-based model for long-sequence time-series forecasting with O(L log L) complexity, featuring ProbSparse attention and self-attention distilling.
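This description matches the Informer architecture. A toy PyTorch sketch of the idea behind ProbSparse attention: score each query's "activeness" as max minus mean of its scaled dot products and keep only the top-u queries (the actual method estimates this measure from roughly log L sampled keys rather than scoring all of them, which is where the O(L log L) bound comes from):

```python
import torch

def probsparse_select(q, k, u):
    """Pick the u most 'active' queries by the sparsity measure
    M(q_i) = max_j(q_i.k_j / sqrt(d)) - mean_j(q_i.k_j / sqrt(d)).
    Simplified: Informer estimates M from ~log(L_k) sampled keys."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5      # (L_q, L_k)
    m = scores.max(dim=-1).values - scores.mean(dim=-1)
    top = m.topk(u).indices                          # active query positions
    return top, scores[top]                          # only u rows get attention

q, k = torch.randn(96, 64), torch.randn(96, 64)
u = int(torch.log(torch.tensor(96.0)).ceil())        # u ~ log L
idx, sub_scores = probsparse_select(q, k, u)
print(idx.shape, sub_scores.shape)                   # u queries vs. all keys
```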