Brief-details: mbart-large-cc25-ar-en is a specialized Arabic-to-English translation model fine-tuned on the OPUS corpus. Not production-ready; suitable for research and testing.
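A minimal usage sketch with transformers, assuming the checkpoint follows the standard mBART translation interface (the repo id below is a placeholder):

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

# Placeholder repo id; substitute the actual checkpoint name.
model_name = "your-org/mbart-large-cc25-ar-en"
tokenizer = MBartTokenizer.from_pretrained(model_name, src_lang="ar_AR")
model = MBartForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer("صباح الخير", return_tensors="pt")  # "good morning"
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"],
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```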
Brief-details: Arabic GPT2-small model trained on Wikipedia data (900MB). Achieves a perplexity of 72.19. Suitable for text and poetry generation demos.
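Text generation follows the standard GPT-2 pipeline pattern; a sketch with a placeholder repo id:

```python
from transformers import pipeline

# Placeholder repo id; point this at the actual Arabic GPT2-small checkpoint.
generator = pipeline("text-generation", model="your-org/gpt2-small-arabic")
# Prompt means "once upon a time"; sampling settings are illustrative.
print(generator("كان يا ما كان", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```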
Brief-details: ArcaneGANv0.4 is a GAN-based image transformation model that converts real photos into the Arcane animation style, developed by akhaliq and hosted on HuggingFace.
Brief-details: Vision Transformer model fine-tuned for cat-vs-dog classification, built on the ViT-base architecture and achieving 98.83% accuracy.
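A minimal classification sketch, assuming the checkpoint ships with standard image-classification labels (repo id is a placeholder):

```python
from transformers import pipeline

# Placeholder repo id; any ViT-base checkpoint fine-tuned on cats vs. dogs loads the same way.
classifier = pipeline("image-classification", model="your-org/vit-base-cats-vs-dogs")
print(classifier("pet.jpg"))  # e.g. [{'label': 'cat', 'score': 0.99}, ...]
```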
Brief-details: Small-scale instruction-tuned T5 model trained on 757 NLP tasks. Specializes in following plain-language instructions for a variety of NLP operations.
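Instruction-style prompting goes through the text2text pipeline; a sketch where both the repo id and the Definition/Input prompt format are illustrative assumptions:

```python
from transformers import pipeline

# Placeholder repo id for the instruction-tuned T5 checkpoint.
nlp = pipeline("text2text-generation", model="your-org/t5-small-instruct")
prompt = (
    "Definition: Classify the sentiment of the sentence as positive or negative.\n"
    "Input: I loved this film."
)
print(nlp(prompt)[0]["generated_text"])
```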
Brief-details: A Japanese language model that enhances word and entity understanding through knowledge-based embeddings, showing strong performance on JGLUE benchmarks.
Brief-details: Optimized Qwen2-VL-7B vision-language model using Unsloth's dynamic 4-bit quantization, offering 40% lower memory usage and 1.8x faster inference.
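Loading the pre-quantized checkpoint is a standard transformers call; the repo id below is assumed from Unsloth's naming scheme, so verify it before use:

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor

# Assumed repo id following Unsloth's bnb-4bit naming convention.
model_id = "unsloth/Qwen2-VL-7B-Instruct-bnb-4bit"
# The 4-bit quantization config is baked into the checkpoint, so no extra flags are needed.
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)
```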
Brief-details: RUPunct_big is a Russian punctuation-restoration model that adds punctuation marks and capitalization to unpunctuated text.
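A hedged sketch, assuming the model is exposed as a token-classification head whose labels encode the punctuation and casing to restore (the repo id is a guess):

```python
from transformers import pipeline

# Assumed repo id and task type; labels would indicate per-token punctuation/case.
restorer = pipeline("token-classification", model="RUPunct/RUPunct_big")
print(restorer("привет как дела"))  # "hi how are things", unpunctuated
```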
Brief-details: FLAN-T5 Large model fine-tuned on the DuoRC ParaphraseRC dataset for question answering, specialized in reading comprehension over paraphrased passages.
Brief-details: A fine-tuned variant of FLAN-T5-large specialized for question answering tasks with separated facts, optimized for the QASC dataset format.
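Both FLAN-T5 question-answering fine-tunes above follow the same text-to-text pattern; a minimal sketch (repo id and prompt format are illustrative):

```python
from transformers import pipeline

# Placeholder repo id; swap in either QA fine-tune.
qa = pipeline("text2text-generation", model="your-org/flan-t5-large-duorc")
prompt = (
    "question: Who wrote the letter? "
    "context: After the party, Maria wrote a letter to her brother."
)
print(qa(prompt)[0]["generated_text"])
```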
Brief-details: BigBird-RoBERTa model fine-tuned on the MNLI dataset for natural language inference, combining an efficient sparse-attention mechanism with robust language understanding.
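NLI pairs go through the text-classification pipeline as premise/hypothesis; a sketch with a placeholder repo id:

```python
from transformers import pipeline

# Placeholder repo id for the BigBird-RoBERTa MNLI fine-tune.
nli = pipeline("text-classification", model="your-org/bigbird-roberta-base-mnli")
result = nli({"text": "A man is playing a guitar.", "text_pair": "A person is making music."})
print(result)  # e.g. [{'label': 'ENTAILMENT', 'score': ...}]
```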
Brief-details: Vision Transformer-based model for predicting image aesthetics scores, built on the ViT-Large architecture with a 14x14 patch size.
Brief-details: A compact test version of the mT5 model created for experimental purposes, featuring randomized weights and a minimal architecture for testing workflows.
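Such checkpoints are handy as smoke tests; a sketch (the repo id is a placeholder, and the random weights make outputs meaningless by design):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id; the point is a fast end-to-end check of the seq2seq code path.
model_id = "your-org/tiny-random-mt5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
out = model.generate(**tokenizer("test input", return_tensors="pt"), max_new_tokens=5)
print(tokenizer.batch_decode(out))  # gibberish, but proves the pipeline runs
```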
Brief-details: Fine-tuned Qwen2-7B model optimized for improved benchmark performance, featuring ChatML support and GGUF quantization options. Strong scores on IFEval and MMLU-PRO.
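With llama-cpp-python, the GGUF file and ChatML template wire up like this (the local file path and quant choice are assumptions):

```python
from llama_cpp import Llama

# Assumed local path; the filename depends on which quant you downloaded.
llm = Llama(model_path="qwen2-7b-finetune.Q4_K_M.gguf", chat_format="chatml")
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize ChatML in one sentence."}]
)
print(resp["choices"][0]["message"]["content"])
```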
Brief-details: A lightweight test model by katuni4ka, likely a randomly initialized FLUX fill (inpainting) variant intended for testing generative workflows rather than producing real output.
Brief-details: FLAN-T5 Large model fine-tuned on the GLUE WNLI task, optimized for natural language inference using the LoRA adaptation method.
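If the fine-tune is distributed as a LoRA adapter, it layers onto the stock base model via PEFT; a sketch where the adapter repo id and prompt format are placeholders:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

# The base model is standard FLAN-T5 Large; the adapter repo id is a placeholder.
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
model = PeftModel.from_pretrained(base, "your-org/flan-t5-large-glue-wnli-lora")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")

prompt = "wnli sentence1: The trophy doesn't fit in the suitcase because it is too big. sentence2: The trophy is too big."
out = model.generate(**tokenizer(prompt, return_tensors="pt"))
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```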
Brief-details: A 36B-parameter LLaMA-based model offered in multiple quantization options from Q8_0 down to IQ2_XS, giving flexible size-quality tradeoffs.
Brief-details: VITS-based model specialized in age and gender detection from voice input, developed by circulus and hosted on HuggingFace.
Brief-details: Canvers-story v3.9.1 is a story-focused AI model by circulus, hosted on HuggingFace, designed for narrative generation and creative writing tasks.
Brief-details: A specialized AI model by circulus focusing on Disney-style image generation, built on the Stable Diffusion architecture with custom fine-tuning for animated content creation.
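Generation follows the standard diffusers pattern; a sketch with a placeholder repo id and prompt:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id for the Disney-style fine-tune.
pipe = StableDiffusionPipeline.from_pretrained(
    "circulus/disney-style-diffusion", torch_dtype=torch.float16
).to("cuda")
image = pipe("a castle on a hill, animated movie style").images[0]
image.save("castle.png")
```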
Brief-details: A merged model combining ControlNet and Anything v3.0, enabling precise control over image generation while preserving Anything v3.0's artistic style.
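The merged single-file checkpoint bundles both parts; in diffusers the equivalent pattern pairs a ControlNet with the Anything v3.0 base (repo ids are illustrative):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Illustrative repo ids: a canny ControlNet paired with an Anything v3.0 base.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "Linaqruf/anything-v3.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

edge_map = load_image("edges.png")  # preprocessed control image, e.g. Canny edges
image = pipe("1girl, masterpiece, best quality", image=edge_map).images[0]
image.save("controlled.png")
```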