Brief-details: Jasmine-350M is a 350M-parameter Arabic language model designed for few-shot learning, part of the JASMINE suite of models trained on 235GB of Arabic text data.
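A minimal few-shot prompting sketch with Hugging Face transformers; the hub ID below is an assumption (substitute the actual JASMINE checkpoint), and the Arabic sentiment prompt is purely illustrative.

```python
# Sketch: few-shot prompting a causal LM via transformers.
# "UBC-NLP/Jasmine-350M" is a hypothetical hub ID, not confirmed by the source.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "UBC-NLP/Jasmine-350M"  # assumption; replace with the real ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Two labeled Arabic sentiment examples followed by an unlabeled query.
prompt = (
    "المراجعة: الفيلم رائع جدا => إيجابي\n"
    "المراجعة: الخدمة سيئة => سلبي\n"
    "المراجعة: الطعام لذيذ =>"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```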
Brief-details: A Malayalam-English translation model using ULMFiT and a Seq2Seq architecture, built with fastai. Pre-trained on Malayalam language data with SentencePiece tokenization (vocab size: 10k).
Brief-details: HuBERT-large ASR model fine-tuned for Egyptian Arabic, achieving 25.9% WER. 315M params, trained on the MGB-3 dataset, supports 16kHz audio.
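A minimal transcription sketch via the transformers ASR pipeline; the model ID is a placeholder, since the source does not give the hub name.

```python
# Sketch: transcribing 16 kHz mono audio with a CTC-style ASR checkpoint.
# "<hubert-egyptian-arabic-id>" is a placeholder for the actual hub ID.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="<hubert-egyptian-arabic-id>")
# Accepts a path, a raw array, or {"array": ..., "sampling_rate": 16000}.
print(asr("clip.wav")["text"])
```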
Brief-details: A high-quality diffusion model trained on the CelebA-HQ dataset for 256x256 face generation, offering multiple noise schedulers for inference speed/quality trade-offs.
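A minimal sketch with the diffusers library illustrating the speed/quality trade-off; it assumes the checkpoint is the standard `google/ddpm-celebahq-256` release.

```python
# Sketch: unconditional 256x256 face generation with diffusers.
from diffusers import DDPMPipeline, DDIMPipeline

# Full-quality ancestral sampling (1000 denoising steps by default).
pipe = DDPMPipeline.from_pretrained("google/ddpm-celebahq-256")
image = pipe().images[0]

# Faster: reuse the same weights with a DDIM sampler and far fewer steps.
fast = DDIMPipeline.from_pretrained("google/ddpm-celebahq-256")
image = fast(num_inference_steps=50).images[0]
image.save("face.png")
```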
Brief-details: Russian language dialogue model based on GPT-2, trained for conversational AI with strong sensibleness (0.78) and specificity (0.69) metrics.
Brief-details: Multilingual VITS-based TTS model supporting Luxembourgish, German, French, English, and Portuguese, trained on custom dataset with Coqui-TTS framework.
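A minimal inference sketch with the Coqui-TTS Python API; the checkpoint and config paths are placeholders, and a multilingual/multi-speaker VITS release may additionally require `language` and `speaker` arguments.

```python
# Sketch: synthesizing speech with a locally downloaded Coqui-TTS checkpoint.
# model.pth / config.json are placeholder paths for the released files.
from TTS.api import TTS

tts = TTS(model_path="model.pth", config_path="config.json")
# Luxembourgish sample text; pass language=/speaker= if the config demands it.
tts.tts_to_file(text="Moien, wéi geet et?", file_path="out.wav")
```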
Brief-details: NLLB-200 1.3B is a large-scale multilingual translation model supporting 200+ languages, optimized for low-resource language pairs and released under a CC-BY-NC license.
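A minimal translation sketch with transformers, assuming the `facebook/nllb-200-1.3B` checkpoint. FLORES-200 language codes select the pair; the target code is forced as the first generated token.

```python
# Sketch: English-to-Hausa translation with NLLB-200 1.3B.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "facebook/nllb-200-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
out = model.generate(
    **inputs,
    # Force the target language (Hausa) as the first decoded token.
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("hau_Latn"),
    max_new_tokens=50,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```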
Brief-details: A VAE-based model for drug molecule generation that converts SMILES representations into continuous vectors for efficient chemical compound exploration and optimization.
Brief-details: A Vision Transformer (ViT) model fine-tuned for spectrogram-based gender classification, achieving 93.66% validation accuracy. Released under the Apache 2.0 license.
Brief-details: French GPT-2 base model trained on Wikipedia and CC-100 data. Features a 50k-vocabulary BPE tokenizer. Ideal for French text-generation tasks.
Brief-details: A Ridge regression model for stock price prediction targeting closing ("Close") values, achieving a 0.999858 R² score. Built with scikit-learn and includes an EasyPreprocessor pipeline.
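A minimal scikit-learn sketch of the same idea: Ridge regression on lagged closing prices. The synthetic series and the StandardScaler (standing in for dabl's EasyPreprocessor) are assumptions for illustration.

```python
# Sketch: Ridge regression on 5 lagged closes, with a chronological split.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

close = np.cumsum(np.random.randn(500)) + 100               # synthetic price series
X = np.column_stack([close[i:-(5 - i)] for i in range(5)])  # 5 lagged closes
y = close[5:]                                               # next-day close

model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
model.fit(X[:-50], y[:-50])                                 # train on the past
print(model.score(X[-50:], y[-50:]))                        # R^2 on held-out tail
```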
Brief-details: A fine-tuned wav2vec2 model for Urdu speech emotion classification achieving 97.5% accuracy, covering four emotion classes: angry, happy, neutral, and sad.
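A minimal sketch via the transformers audio-classification pipeline; the model ID is a placeholder, as the source does not name the hub checkpoint.

```python
# Sketch: scoring a speech clip against the four emotion classes.
# "<urdu-emotion-wav2vec2-id>" is a placeholder for the actual hub ID.
from transformers import pipeline

clf = pipeline("audio-classification", model="<urdu-emotion-wav2vec2-id>")
# Expects 16 kHz speech; returns label/score pairs for angry/happy/neutral/sad.
print(clf("sample.wav"))
```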
Brief-details: K-12BERT is a BERT model specialized for the K-12 education domain, trained on a custom K-12Corpus with an MLM objective for educational applications.
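A minimal sketch probing the MLM head with the fill-mask pipeline; `vasugoel/K-12BERT` is the expected hub ID but is an assumption here.

```python
# Sketch: masked-token prediction with K-12BERT's MLM head.
from transformers import pipeline

fill = pipeline("fill-mask", model="vasugoel/K-12BERT")  # assumed hub ID
for pred in fill("Photosynthesis converts sunlight into chemical [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```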
Brief-details: A fine-tuned Swin Transformer model for skin cancer classification, achieving 72.75% accuracy across 7 different skin conditions. Based on Microsoft's swin-tiny architecture.
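A minimal sketch via the transformers image-classification pipeline; the fine-tuned model ID is a placeholder.

```python
# Sketch: classifying a skin-lesion image across the 7 condition labels.
# "<swin-tiny-skin-cancer-id>" is a placeholder for the actual hub ID.
from transformers import pipeline

clf = pipeline("image-classification", model="<swin-tiny-skin-cancer-id>")
print(clf("lesion.jpg", top_k=3))  # top 3 of the 7 skin-condition classes
```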
Brief-details: NVIDIA HiFiGAN vocoder model for high-quality text-to-speech synthesis, trained on the LJSpeech dataset. 85M parameters, operating at a 22,050 Hz sampling rate.
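A minimal NeMo sketch, assuming the checkpoint is NeMo's LJSpeech `tts_hifigan` release; in practice the mel spectrogram would come from an upstream model such as FastPitch or Tacotron 2, so the random tensor below is only a shape placeholder.

```python
# Sketch: converting a mel spectrogram to a waveform with NeMo's HiFi-GAN.
import torch
from nemo.collections.tts.models import HifiGanModel

vocoder = HifiGanModel.from_pretrained(model_name="tts_hifigan")  # assumed name
mel = torch.randn(1, 80, 200)  # placeholder mel [batch, mel_bins, frames]
audio = vocoder.convert_spectrogram_to_audio(spec=mel)  # 22,050 Hz waveform
```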
Brief-details: SepFormer speech enhancement model trained on WHAM! dataset, achieving 13.8 dB SI-SNR. Specializes in denoising audio at 16kHz sampling rate.
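A minimal denoising sketch with SpeechBrain's pretrained interface, assuming the `speechbrain/sepformer-wham16k-enhancement` hub ID.

```python
# Sketch: enhancing a noisy 16 kHz recording with SepFormer.
import torchaudio
from speechbrain.pretrained import SepformerSeparation

model = SepformerSeparation.from_hparams(
    source="speechbrain/sepformer-wham16k-enhancement",
    savedir="pretrained_models/sepformer-wham16k-enhancement",
)
est = model.separate_file(path="noisy.wav")  # [batch, time, sources]
torchaudio.save("enhanced.wav", est[:, :, 0].detach().cpu(), 16000)
```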
Brief-details: A 248M-parameter T5-based model for Chinese text spelling correction, achieving a 73% F1 score on SIGHAN2015. Built by shibing624 for production use.
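A minimal correction sketch with transformers, assuming the checkpoint is shibing624's `mengzi-t5-base-chinese-correction`; the misspelled sample sentence is illustrative.

```python
# Sketch: Chinese spelling correction with a T5 seq2seq checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "shibing624/mengzi-t5-base-chinese-correction"  # assumed hub ID
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Input contains two typos ("因该" -> "应该", "让坐" -> "让座").
inputs = tokenizer("少先队员因该为老人让坐", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```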
Brief-details: AraElectra model fine-tuned for Arabic question answering, achieving 65.12% exact-match accuracy on SQuADv2. Specialized for extractive QA tasks.
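A minimal extractive-QA sketch via the transformers question-answering pipeline; the model ID is a placeholder, and the Arabic question/context pair is illustrative.

```python
# Sketch: extractive Arabic QA ("What is the capital of Egypt?").
# "<araelectra-squadv2-id>" is a placeholder for the actual hub ID.
from transformers import pipeline

qa = pipeline("question-answering", model="<araelectra-squadv2-id>")
result = qa(
    question="ما هي عاصمة مصر؟",
    context="القاهرة هي عاصمة مصر وأكبر مدنها.",
)
print(result["answer"], result["score"])
```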
Brief-details: BART-based podcast summarization model fine-tuned on a Spotify dataset, reaching a training loss of 2.29. Optimized for generating concise, readable podcast summaries.
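A minimal sketch via the transformers summarization pipeline; the model ID and transcript path are placeholders.

```python
# Sketch: summarizing a podcast transcript with a fine-tuned BART model.
# "<bart-podcast-summarizer-id>" is a placeholder for the actual hub ID.
from transformers import pipeline

summarize = pipeline("summarization", model="<bart-podcast-summarizer-id>")
transcript = open("episode_transcript.txt").read()
# BART caps input at 1024 tokens, so long transcripts need chunking/truncation.
print(summarize(transcript[:3000], max_length=128, min_length=30)[0]["summary_text"])
```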
Brief-details: Pruned transducer-based ASR model trained on the TAL_CSASR dataset, achieving 7.15% CER on the dev set and 7.22% on the test set using modified beam search with model averaging.
Brief-details: A Vision Transformer (ViT) model trained on the Fashion-MNIST dataset, achieving 94.31% accuracy with strong per-class precision and recall.