BRIEF DETAILS: Vietnamese text correction model with 420M parameters, built on BARTpho-syllable architecture. Specialized in fixing spelling and grammar errors in Vietnamese text.
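A minimal usage sketch for a seq2seq correction model of this kind via the transformers text2text-generation pipeline; the repository ID below is a placeholder, not the actual model name.

```python
# Sketch: run a BARTpho-style Vietnamese correction model as a seq2seq pipeline.
from transformers import pipeline

corrector = pipeline(
    "text2text-generation",
    model="your-namespace/vietnamese-correction",  # hypothetical repository ID
)

noisy = "Toi dang hoc tieng Viet va hay viet sai chinh ta."
print(corrector(noisy, max_length=128)[0]["generated_text"])
```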
BRIEF DETAILS: Multilingual BERT-based model for geolocation prediction of short texts; outputs Gaussian mixture models and achieves 80% accuracy within 161 km.
Brief-details: Character LoRA model for generating Etna from the Disgaea game series. Features a red-haired character with bat wings and twintails. CreativeML OpenRAIL-M licensed.
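A minimal sketch of applying a character LoRA of this kind on top of a Stable Diffusion base with diffusers; the base model choice, LoRA repository ID, and prompt/trigger words are assumptions, not taken from the card.

```python
# Sketch: load a character LoRA onto a Stable Diffusion base model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed base model
).to("cuda")
pipe.load_lora_weights("your-namespace/etna-disgaea-lora")  # hypothetical repository ID

image = pipe(
    "etna, red hair, twintails, bat wings, anime style",  # assumed prompt, check the card for triggers
    num_inference_steps=25,
).images[0]
image.save("etna.png")
```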
Brief-details: Tamil sentence similarity model built on SBERT architecture, supporting semantic comparison of Tamil text with state-of-the-art performance and NLI fine-tuning.
BRIEF DETAILS: Multi-identifier LoRA model for generating anime characters with multiple costume variations. Specializes in Genshin Impact and Honkai Impact characters, with a detailed training methodology.
BRIEF-DETAILS: GPT-3 small variant fine-tuned on the CNN/Daily Mail dataset, optimized for news-style text generation with PyTorch and Transformers.
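A minimal sketch of sampling news-style text from such a fine-tuned causal LM with the transformers text-generation pipeline; the repository ID is a placeholder.

```python
# Sketch: generate news-style continuations from the fine-tuned model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="your-namespace/gpt3-small-cnn-dailymail",  # hypothetical repository ID
)
prompt = "The city council announced on Tuesday that"
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```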
Brief-details: Kukicha is a versatile anime-style text-to-image model combining Anything v4.5 and DPEP capabilities, optimized for both character generation and detailed backgrounds.
🔬 DeepFake ECG Generator | Synthetic electrocardiogram generation using transformers | 203 downloads | BSD licensed | Published by deepsynthbody with support for 8-lead ECG synthesis
Brief Details: ControlNet model trained on normal map estimation, enabling precise control over image generation through surface normal information and depth mapping.
Brief-details: A specialized ControlNet model trained on HED boundary detection, enabling precise edge-based control of Stable Diffusion image generation with soft edge detection.
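A minimal sketch of edge-conditioned generation with a HED ControlNet in diffusers, assuming the checkpoint follows the lllyasviel/sd-controlnet-hed release and that a soft-edge (HED) map has already been computed and saved locally.

```python
# Sketch: condition Stable Diffusion on a precomputed HED boundary map.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-hed", torch_dtype=torch.float16  # assumed checkpoint ID
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

edge_map = load_image("hed.png")  # precomputed HED soft-edge image
image = pipe("a cozy wooden cabin at dusk", image=edge_map, num_inference_steps=25).images[0]
image.save("cabin.png")
```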
Brief Details: Vision Transformer (ViT) model fine-tuned for diabetic retinopathy classification achieving 72.87% accuracy, trained using PyTorch with linear learning rate scheduling.
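A minimal sketch of running such a fine-tuned ViT classifier through the transformers image-classification pipeline; the repository ID and image path are placeholders.

```python
# Sketch: classify a fundus image with the fine-tuned ViT model.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="your-namespace/vit-diabetic-retinopathy",  # hypothetical repository ID
)
print(classifier("fundus_example.jpg"))  # list of labels with confidence scores
```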
Brief-details: Fine-tuned mpnet model optimized for sentence embeddings, trained on 7 datasets with 109M parameters. Achieves 0.385 top-1 accuracy for sentence similarity tasks.
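A minimal sketch of scoring sentence similarity with an mpnet-based encoder via sentence-transformers; the repository ID is a placeholder.

```python
# Sketch: embed two sentences and compare them with cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your-namespace/mpnet-sentence-embeddings")  # hypothetical ID
embeddings = model.encode(["A man is eating food.", "Someone is having a meal."])
print(util.cos_sim(embeddings[0], embeddings[1]))
```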
BRIEF-DETAILS: Multilingual model for punctuation restoration, capitalization, and sentence segmentation, supporting 47 languages. Uses a transformer architecture with a 64k vocabulary.
BRIEF-DETAILS: Multimodal Chain-of-Thought (MM-CoT) model that combines vision and language for enhanced reasoning, using a two-stage training approach for scientific QA tasks.
Brief-details: A comprehensive LoRA model backup collection focusing on anime/game character styles, with 80+ characters and costume concepts primarily from popular franchises.
Brief Details: BioViL-T is a 110M parameter vision-language model for chest X-ray analysis, featuring temporal multi-modal pre-training for improved radiology interpretation.
Brief Details: ResNet50 model fine-tuned on CIFAR-10 dataset achieving 94.65% accuracy, implemented in PyTorch using timm library. MIT licensed.
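A minimal sketch of loading a timm-trained checkpoint like this through timm's Hugging Face Hub integration; the repository ID and input resolution are assumptions, not taken from the card.

```python
# Sketch: load the fine-tuned ResNet50 from the Hub via timm and run one image.
import timm
import torch

model = timm.create_model("hf_hub:your-namespace/resnet50-cifar10", pretrained=True)  # hypothetical ID
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # replace with a preprocessed CIFAR-10 image at the trained resolution
with torch.no_grad():
    probs = model(dummy).softmax(dim=-1)
print(probs.argmax(dim=-1))  # predicted class index among the 10 CIFAR classes
```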
Brief-details: SPECTER2 is an advanced scientific paper embedding model with task-specific adapters for citation analysis, classification, and retrieval tasks.
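A minimal sketch of embedding a paper with SPECTER2, assuming the allenai/specter2_base backbone plus the proximity adapter published as allenai/specter2, loaded through the adapters library; consult the model card for the exact adapter names and loading API.

```python
# Sketch: encode a paper (title + abstract) into a SPECTER2 embedding.
from transformers import AutoTokenizer
from adapters import AutoAdapterModel  # assumed loading path; see the model card

tokenizer = AutoTokenizer.from_pretrained("allenai/specter2_base")
model = AutoAdapterModel.from_pretrained("allenai/specter2_base")
model.load_adapter("allenai/specter2", source="hf", load_as="proximity", set_active=True)

paper = "BERT: Pre-training of Deep Bidirectional Transformers" + tokenizer.sep_token + "We introduce BERT..."
inputs = tokenizer(paper, return_tensors="pt", truncation=True, max_length=512)
embedding = model(**inputs).last_hidden_state[:, 0]  # CLS token as the paper embedding
```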
Brief Details: GPT-R is a 60/40 blend of ppo_hh_gpt-j and GPT-JT-6B-v1, optimized for instruction-following and natural language tasks.
Brief Details: Domain-specific BERT model for energy/materials text mining. Trained on 1.2M papers (2000-2021). Supports masked language modeling & next sentence prediction.
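A minimal sketch of probing such a domain-specific BERT with the fill-mask pipeline; the repository ID and example sentence are placeholders.

```python
# Sketch: query the masked-language-modeling head of the domain BERT.
from transformers import pipeline

fill = pipeline("fill-mask", model="your-namespace/energy-materials-bert")  # hypothetical ID
prompt = f"Perovskite {fill.tokenizer.mask_token} cells show improved stability."
for pred in fill(prompt):
    print(pred["token_str"], round(pred["score"], 3))
```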
Brief-details: CLAP (Contrastive Language-Audio Pretraining) model specialized in audio-text matching and zero-shot audio classification, trained on the LAION-Audio-630K dataset.
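A minimal sketch of zero-shot audio classification with a CLAP checkpoint, assuming an ID along the lines of laion/clap-htsat-unfused; swap in the actual repository name.

```python
# Sketch: classify an audio clip against free-form text labels with CLAP.
from transformers import pipeline

audio_classifier = pipeline(
    "zero-shot-audio-classification",
    model="laion/clap-htsat-unfused",  # assumed checkpoint ID
)
print(audio_classifier("dog_bark.wav", candidate_labels=["dog barking", "rain", "speech"]))
```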