Brief-details: CrudeBERT - A specialized BERT model fine-tuned for sentiment analysis of crude oil news headlines, derived from FinBERT with domain adaptation for oil market predictions.
Brief-details: A specialized Arabic BERT model pre-trained on 167GB of mixed Arabic text (MSA, dialectal, classical), achieving strong performance across NLP tasks.
Brief-details: Arabic dialect identification model built on CAMeLBERT-Mix, fine-tuned on the MADAR Corpus 26 dataset for classifying 26 Arabic dialects with high accuracy.
Brief-details: A DialoGPT-medium model fine-tuned to emulate Makise Kurisu from Steins;Gate, created by BlightZz for character-based conversational AI interactions.
Brief-details: A Yoruba language model for conversational dialogue tasks, developed by Ayoola and hosted on HuggingFace, focused on African language NLP.
Brief-details: A BERT/RoBERTa-based text summarization model fine-tuned on the CNN/DailyMail dataset, optimized with the Adam optimizer and linear learning rate scheduling.
Brief-details: German speech recognition model based on wav2vec2-xls-r-300m achieving 20.16% WER, trained on Mozilla Common Voice with 300M parameters for ASR tasks.
Brief-details: German speech recognition model based on wav2vec2-xls-r-1B, achieving 15.32% WER on evaluation. Fine-tuned on Mozilla Common Voice with strong performance metrics.
Brief-details: A Tamil speech recognition model fine-tuned from wav2vec2-large-xlsr-53, optimized for 16kHz audio input with direct CTC-based transcription capabilities.
Brief-details: Multilingual XLM-RoBERTa large model fine-tuned for QA tasks on the SQuAD and SberQuAD datasets, achieving an 84.3 F1 score on Russian QA.
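The 84.3 F1 quoted for the QA entry above is the standard SQuAD-style token-overlap F1 between the predicted and gold answer spans. A minimal sketch in plain Python (the `token_f1` helper name is illustrative, not part of the model's code):

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """SQuAD-style token-level F1: harmonic mean of precision and
    recall over the multiset of whitespace tokens."""
    pred, ref = prediction.split(), gold.split()
    # Multiset intersection counts each shared token at most min(count) times.
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the red square", "red square"))  # precision 2/3, recall 1 -> 0.8
```

The official SQuAD script additionally lowercases and strips punctuation and articles before tokenizing; that normalization is omitted here for brevity.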
Brief-details: VoxLingua107 ECAPA-TDNN model for spoken language identification across 107 languages, trained on YouTube data with 93% accuracy on the dev set.
Brief-details: Fine-tuned XLS-R 300M model for Kurmanji Kurdish speech recognition, achieving 38.86% WER. Built on the Common Voice 7.0 dataset with extensive training optimization.
Brief-details: A fine-tuned wav2vec2-xls-r-300m model for Hausa speech recognition, achieving 32.9% WER on evaluation. Trained on Common Voice datasets with 50 epochs.
Brief-details: A Wav2vec2 speech recognition model fine-tuned for the Hausa language, based on the XLS-R 300M multilingual architecture. Developed by AbdulmalikAdeyemo for automatic speech recognition tasks.
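Several of the ASR entries above report WER (word error rate): the word-level edit distance between hypothesis and reference transcript, divided by the number of reference words. A minimal sketch in plain Python (the `wer` helper is illustrative, not drawn from any of the listed models):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2 errors / 6 words
```

In practice model cards usually compute this with a library such as `jiwer`, which also normalizes casing and punctuation before scoring.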
Brief-details: PatentSBERTa is a specialized SBERT-based model for patent analysis, mapping sentences to 768-dimensional vectors for semantic search and classification.
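Semantic search over SBERT-style embeddings such as PatentSBERTa's 768-dimensional vectors typically ranks candidates by cosine similarity. A dependency-free sketch with toy vectors standing in for real embeddings:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors:
    dot product divided by the product of their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings"; a real SBERT model would return 768-d vectors.
query = [1.0, 0.0, 0.0]
docs = {"parallel": [2.0, 0.0, 0.0], "orthogonal": [0.0, 1.0, 0.0]}
ranked = sorted(docs, key=lambda k: cosine_similarity(query, docs[k]), reverse=True)
print(ranked)  # ["parallel", "orthogonal"]
```

With the real model, the vectors would come from `sentence-transformers` (e.g. `SentenceTransformer("AI-Growth-Lab/PatentSBERTa").encode(...)`), which also ships a built-in cosine-similarity utility.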
Brief-details: Transformer-XL language model trained on WikiText-103, featuring relative positional embeddings and adaptive softmax for enhanced long-context text generation.
Brief-details: A specialized AI model focused on image generation, utilizing specific trigger words. Available in Safetensors format with explicit content generation capabilities.
Brief-details: HyenaDNA small variant - genomic foundation model for 32k sequence lengths. Efficient DNA processing using Hyena operators instead of attention mechanisms.
Brief-details: High-accuracy (98.7%) face gender classification model using a Vision Transformer architecture. Trained on 102K+ images for binary man/woman classification.
Brief-details: Functionary-small-v2.4-GGUF is a compact GGUF-formatted language model by MeetKai, optimized for function calling and structured outputs.
Brief-details: A specialized image generation model using the "plstps" trigger word, focused on local feature manipulation and scene generation, particularly for café scenes.