Brief-details: A Finnish GPT-3-style language model with 186M parameters, built on the BLOOM architecture and trained on 300B tokens from diverse Finnish text sources. Intended as a foundation model for Finnish text generation.
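A minimal sketch of using a causal LM like this through the Transformers text-generation pipeline; the repo id below (TurkuNLP/gpt3-finnish-small) is an assumption, not stated in the entry, so substitute the actual checkpoint.

```python
# Sketch: Finnish text generation with the Transformers pipeline.
# The repo id below is an assumption; replace it with the actual checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="TurkuNLP/gpt3-finnish-small")

# Generate a short continuation of a Finnish prompt.
output = generator("Suomen kieli on", max_new_tokens=40, do_sample=True, top_p=0.9)
print(output[0]["generated_text"])
```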
Brief-details: AutoVC is an autoencoder-based voice conversion model enabling many-to-many speaker conversion, reporting naturalness scores above MOS 3 while preserving speaker-independent content.
Brief-details: A specialized LoRA model for generating anime-style Hogwarts uniforms with house-specific details. Offers different strength variants and detailed prompting guidelines.
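For LoRA checkpoints like this, the usual pattern is to load the weights onto a Stable Diffusion base model with diffusers. The repo ids, weight file name, and prompt below are placeholders for illustration; the LoRA "strength variants" mentioned above map to the scale value.

```python
# Sketch: applying an anime-style LoRA to a Stable Diffusion base model with diffusers.
# Repo ids, file name, and prompt are placeholders, not the actual model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA weights on top of the base UNet/text encoder.
pipe.load_lora_weights(
    "your-username/hogwarts-uniform-lora", weight_name="hogwarts_uniform.safetensors"
)

image = pipe(
    "1girl, hogwarts school uniform, gryffindor, detailed anime illustration",
    num_inference_steps=28,
    guidance_scale=7.0,
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength; lower for a subtler effect
).images[0]
image.save("hogwarts_uniform.png")
```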
Brief-details: Multilingual NLI model supporting 100+ languages with 107M parameters, fine-tuned on the XNLI and MNLI datasets. Optimized for efficient zero-shot classification.
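NLI-based checkpoints like this are typically used through the zero-shot-classification pipeline, which turns each candidate label into an entailment hypothesis. The model id below is a placeholder for the 107M-parameter checkpoint described above.

```python
# Sketch: multilingual zero-shot classification via the NLI-based pipeline.
# The model id is a placeholder; point it at the actual XNLI/MNLI checkpoint.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="your-org/multilingual-nli-107m")

result = classifier(
    "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU.",
    candidate_labels=["politics", "economy", "sports"],
    hypothesis_template="This example is about {}.",
)
print(result["labels"][0], result["scores"][0])
```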
Brief-details: GPT-2 medium model (380M params) fine-tuned for sentiment analysis on the SST-2 dataset. Reports 92% accuracy; implemented in PyTorch.
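A fine-tuned sentiment classifier like this can be called through the standard text-classification pipeline; the model id below is a placeholder.

```python
# Sketch: SST-2 sentiment scoring with a text-classification pipeline.
# The model id is a placeholder for the fine-tuned GPT-2 medium checkpoint.
from transformers import pipeline

sentiment = pipeline("text-classification", model="your-org/gpt2-medium-sst2")

print(sentiment("The film is a delightful surprise from start to finish."))
# e.g. [{'label': 'POSITIVE', 'score': 0.98}]
```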
Brief-details: Portuguese RoBERTa model specialized for legal text, pretrained on the MultiLegalPile dataset. Supports masked language modeling and fine-tuning for downstream tasks.
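Since the model exposes masked language modeling, a quick sanity check is the fill-mask pipeline; the repo id below is a placeholder and the `<mask>` token follows RoBERTa conventions.

```python
# Sketch: masked-token prediction with the Portuguese legal RoBERTa model.
# The repo id is a placeholder; <mask> is the RoBERTa mask token.
from transformers import pipeline

fill = pipeline("fill-mask", model="your-org/roberta-pt-legal")

for pred in fill("O réu foi condenado ao pagamento de <mask> pecuniária."):
    print(pred["token_str"], round(pred["score"], 3))
```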
Brief-details: A fine-tuned DistilBERT model (66.4M parameters) for token classification, achieving an 88.49% F1 score and geared toward knowledge graph construction.
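A token-classification pipeline is the typical way to extract the spans that would later become knowledge-graph nodes; the model id below is a placeholder for the checkpoint described above.

```python
# Sketch: extracting entity spans with a token-classification pipeline,
# e.g. as a first step toward knowledge-graph triples. Model id is a placeholder.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="your-org/distilbert-kg-token-classifier",
    aggregation_strategy="simple",  # merge subword pieces into full spans
)

for span in tagger("Marie Curie won the Nobel Prize in Physics in 1903."):
    print(span["entity_group"], span["word"], round(span["score"], 3))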
Brief-details: COMET-based translation evaluation model supporting 94 languages, built on the XLM-R encoder and designed to score translations against source and reference texts.
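COMET models are normally scored with the `unbabel-comet` package rather than a bare Transformers pipeline. The checkpoint name below is an assumption; use the actual repo id of the model described above.

```python
# Sketch: scoring translations with the unbabel-comet package (pip install unbabel-comet).
# The checkpoint name is an assumption; substitute the actual model id.
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

data = [
    {
        "src": "Der Hund bellt im Garten.",
        "mt": "The dog is barking in the garden.",
        "ref": "The dog barks in the garden.",
    }
]
scores = model.predict(data, batch_size=8, gpus=0)
print(scores.system_score)
```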
Brief-details: LayoutLMv2-based model fine-tuned for visual question answering on documents, trained with PyTorch and Transformers using a linear learning-rate schedule.
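Document QA checkpoints like this are usually driven through the document-question-answering pipeline, which runs OCR (via pytesseract and Pillow) before the model. The model id and file name below are placeholders.

```python
# Sketch: document visual question answering with the Transformers pipeline.
# Requires pytesseract and Pillow for OCR; model id and file name are placeholders.
from transformers import pipeline

doc_qa = pipeline("document-question-answering", model="your-org/layoutlmv2-docvqa")

answer = doc_qa(image="invoice.png", question="What is the invoice total?")
print(answer[0]["answer"], answer[0]["score"])
```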
Brief-details: DeBERTa-v1-base is a 124M parameter Russian language model trained on 400GB of text, featuring 12 encoder layers and achieving strong performance on Russian SuperGLUE benchmarks.
Brief-details: DeBERTa-based hate speech detection model for social media, using ensemble methods and back-translation. Reports state-of-the-art results with an RMSE of 0.766.
Brief-details: A Croatian legal language model (111M parameters) based on the RoBERTa architecture, trained on a legal corpus for domain-specific NLP tasks.
Brief-details: A merged text-to-image model combining AbyssOrangeMix2, pastel-mix, and other prominent models, offering three variants (A/O/P) for different artistic styles.
Brief-details: ClinicalT5-base is a T5-based transformer pre-trained on clinical text, designed for medical text generation and understanding tasks.
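A T5-style model like this loads through the standard seq2seq classes; the repo id and the summarization-style prompt below are assumptions for illustration.

```python
# Sketch: loading a T5-style clinical model for seq2seq generation.
# The repo id and prompt format are assumptions; adjust to the actual checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("your-org/ClinicalT5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("your-org/ClinicalT5-base")

text = "summarize: Patient presents with a three-day history of productive cough and fever."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```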
Brief-details: A Stable Diffusion model trained on stylized photographs, emphasizing Lomography-style effects with bright colors and film artifacts. Uses the "lomo style" trigger token.
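With diffusers, the style is invoked by placing the trigger phrase in the prompt; the repo id below is a placeholder, and "lomo style" is the trigger noted above.

```python
# Sketch: generating an image with the "lomo style" trigger token via diffusers.
# The repo id is a placeholder for the actual Stable Diffusion checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "your-username/lomo-style-diffusion", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "lomo style photo of a rainy street at night, bright saturated colors, film grain",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("lomo_street.png")
```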
Brief-details: CafeBERT is a state-of-the-art Vietnamese language model based on XLM-RoBERTa, optimized for tasks like question answering and natural language inference.
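Since CafeBERT is a pretrained encoder meant for downstream fine-tuning, a minimal sketch is loading it as a plain encoder; the repo id "uitnlp/CafeBERT" is an assumption, so adjust to the actual checkpoint.

```python
# Sketch: loading CafeBERT as an encoder for downstream Vietnamese NLU fine-tuning.
# The repo id is an assumption; replace with the actual checkpoint.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("uitnlp/CafeBERT")
model = AutoModel.from_pretrained("uitnlp/CafeBERT")

inputs = tokenizer("Hà Nội là thủ đô của Việt Nam.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```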
Brief-details: Palmyra-base is a 5B-parameter transformer-based language model optimized for English text generation, performing well on tasks like sentiment analysis and summarization.
Brief-details: A LoRA model trained on characters from the Klonoa series, featuring Klonoa, Lolo, the King of Sorrow, and Tat. Optimized for anime-style text-to-image generation.
Brief-details: Text-to-image AI model optimized for ultra-realistic renders, specializing in Unreal Engine-style imagery. Free API access available through stablediffusionapi.com.
Brief-details: SpeechT5 ASR is Microsoft's unified speech-text transformer fine-tuned on LibriSpeech for speech recognition. MIT licensed, PyTorch-based.
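A short transcription sketch using the dedicated SpeechT5 classes in Transformers; the repo id "microsoft/speecht5_asr" and the use of librosa for audio loading are assumptions.

```python
# Sketch: transcribing 16 kHz audio with a SpeechT5 ASR checkpoint.
# The repo id and librosa-based loading are assumptions for illustration.
import librosa
from transformers import SpeechT5Processor, SpeechT5ForSpeechToText

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_asr")
model = SpeechT5ForSpeechToText.from_pretrained("microsoft/speecht5_asr")

waveform, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(audio=waveform, sampling_rate=16000, return_tensors="pt")

predicted_ids = model.generate(**inputs, max_length=200)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```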
Brief-details: A LoRA model trained on personal artwork, optimized for anime-style image generation, with a smaller file size and higher-quality output than previous versions.