Brief Details: NFNet model trained on ImageNet-1k with 71.5M params. Unique normalization-free architecture achieving high performance in image classification.
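A minimal inference sketch for an ImageNet-1k classifier like this one, assuming it is published as a `timm` checkpoint; the `dm_nfnet_f0` model name below is an assumption, not taken from the entry:

```python
import timm
import torch
from PIL import Image

# Assumed timm checkpoint name; substitute the actual model id.
model = timm.create_model("dm_nfnet_f0", pretrained=True)
model.eval()

# Build the preprocessing pipeline the checkpoint was trained with.
cfg = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**cfg, is_training=False)

img = Image.open("example.jpg").convert("RGB")
with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))
print(logits.softmax(dim=-1).topk(5))
```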
Brief Details: DeciLM-7B-instruct: A 7B parameter instruction-following LLM with variable Grouped-Query Attention, fine-tuned on SlimOrca dataset. Optimized for efficiency and accuracy.
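A hedged loading sketch with 🤗 Transformers; DeciLM's custom attention implementation generally requires `trust_remote_code=True`, and the `Deci/DeciLM-7B-instruct` repo id is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Deci/DeciLM-7B-instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

prompt = "How do I reverse a linked list in Python?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```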
Brief Details: A powerful Vision Transformer model using SigLIP (Sigmoid Loss) for zero-shot image classification, trained on the WebLI dataset at 384px resolution.
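Zero-shot classification with a SigLIP checkpoint can be sketched via the Transformers pipeline; the `google/siglip-so400m-patch14-384` model id below is an assumption, and any 384px SigLIP checkpoint would work the same way:

```python
from transformers import pipeline

# Assumed checkpoint; swap in the actual SigLIP repo id.
classifier = pipeline(
    task="zero-shot-image-classification",
    model="google/siglip-so400m-patch14-384",
)
result = classifier(
    "photo.jpg",
    candidate_labels=["a cat", "a dog", "a car"],
)
print(result)
```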
Brief Details: A state-of-the-art Mixture-of-Experts LLM with 1B active and 7B total parameters, delivering performance competitive with larger models.
Brief Details: Efficient image classification model with 5.2M params, using innovative "ghost" feature generation for lightweight architecture. ImageNet-trained, Apache 2.0 licensed.
Brief Details: MnasNet is a compact, mobile-focused CNN with 4.42M params, trained on ImageNet-1k using RMSProp optimization. Offers efficient inference for mobile vision tasks.
Brief Details: Stable Diffusion v1-4 text-to-image model with advanced latent diffusion capabilities, optimized for 512x512 resolution and enhanced with classifier-free guidance sampling.
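A minimal text-to-image sketch with the `diffusers` library, assuming the standard `CompVis/stable-diffusion-v1-4` checkpoint and a CUDA GPU:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# guidance_scale controls the strength of classifier-free guidance.
image = pipe(
    "a watercolor painting of a lighthouse at dusk",
    guidance_scale=7.5,
    num_inference_steps=50,
).images[0]
image.save("lighthouse.png")
```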
Brief Details: RepVGG A2 model trained on ImageNet-1k featuring 28.3M parameters, optimized for image classification with efficient VGG-style architecture
Brief Details: Meta's Llama-3 70B model optimized with 4-bit quantization by Unsloth, offering 2x faster inference with 60% less memory usage. Supports multiple tensor types.
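A hedged loading sketch with Unsloth's `FastLanguageModel`; the `unsloth/llama-3-70b-bnb-4bit` repo id is an assumption, and 4-bit loading requires bitsandbytes plus a CUDA GPU with enough VRAM:

```python
from unsloth import FastLanguageModel

# Assumed pre-quantized repo id; substitute the actual 4-bit checkpoint.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-70b-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Explain quantization in one paragraph.", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```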
Brief Details: A powerful 32B parameter code-focused LLM with multiple GGUF quantizations, optimized for coding tasks and technical conversations
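GGUF quantizations are typically run with llama.cpp or its Python bindings; a sketch with `llama-cpp-python`, where the file name is a placeholder for whichever quantization fits your hardware:

```python
from llama_cpp import Llama

# Placeholder GGUF path; pick the quantization that fits your RAM/VRAM.
llm = Llama(
    model_path="./model-Q4_K_M.gguf",
    n_ctx=8192,
    n_gpu_layers=-1,  # offload all layers to GPU if available
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that merges two sorted lists."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```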
Brief Details: A powerful x4 image upscaling model using stable diffusion, trained on 10M high-resolution LAION images. Supports text-guided upscaling with noise level control.
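A sketch of text-guided upscaling with `diffusers`, assuming the `stabilityai/stable-diffusion-x4-upscaler` checkpoint:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("low_res.png").convert("RGB")
# noise_level adds controlled noise to the input; higher values allow stronger reinterpretation.
upscaled = pipe(prompt="a white cat", image=low_res, noise_level=20).images[0]
upscaled.save("upscaled.png")
```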
Brief Details: A 7B parameter code generation model trained on 17 programming languages, featuring 16K context window and Fill-in-the-Middle objective. Built for code completion and generation tasks.
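Fill-in-the-Middle prompting assembles prefix and suffix around a gap for the model to complete; the sentinel tokens below follow the StarCoder convention and are an assumption, so check the target model's tokenizer config before use:

```python
# Illustrative FIM prompt assembly; sentinel tokens vary by model family.
prefix = "def average(numbers):\n    "
suffix = "\n    return total / len(numbers)\n"
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
# The model is expected to generate the missing middle, e.g. "total = sum(numbers)".
print(fim_prompt)
```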
Brief Details: A multilingual-to-English translation model supporting 120+ languages, built on Marian architecture with strong performance on European languages.
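A translation sketch with the Transformers Marian classes; the `Helsinki-NLP/opus-mt-mul-en` repo id is an assumption for this many-to-English model:

```python
from transformers import MarianMTModel, MarianTokenizer

repo = "Helsinki-NLP/opus-mt-mul-en"  # assumed repo id
tokenizer = MarianTokenizer.from_pretrained(repo)
model = MarianMTModel.from_pretrained(repo)

batch = tokenizer(["Das ist ein Test.", "C'est un test."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print([tokenizer.decode(t, skip_special_tokens=True) for t in generated])
```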
Brief Details: Helsinki-NLP's English-to-Dutch translation model with a BLEU score of 57.1 on the Tatoeba test set, built on OPUS data using a transformer architecture.
Brief Details: Meta's Llama 3.1 8B model optimized by Unsloth for efficient fine-tuning, offering 2.4x faster performance with 58% less memory usage. Ideal for text generation tasks.
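A hedged fine-tuning setup sketch with Unsloth; the `unsloth/Meta-Llama-3.1-8B` repo id, LoRA rank, and target modules below are illustrative assumptions rather than values from the entry:

```python
from unsloth import FastLanguageModel

# Assumed repo id; substitute the actual checkpoint.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # illustrative LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
# `model` can now be passed to a trl SFTTrainer for instruction fine-tuning.
```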
Brief Details: Llama-3.2-1B: Meta's 1.24B-parameter multilingual LLM optimized for dialogue; with Unsloth optimization it trains 2.4x faster and uses 58% less memory.
Brief Details: BEiT vision transformer model (87M params) trained on ImageNet-22k with masked image modeling, fine-tuned on ImageNet-1k for classification.
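An inference sketch with the Transformers BEiT classes; the `microsoft/beit-base-patch16-224` checkpoint name is an assumption for the base-size, ImageNet-1k fine-tuned model:

```python
import torch
from PIL import Image
from transformers import BeitImageProcessor, BeitForImageClassification

repo = "microsoft/beit-base-patch16-224"  # assumed checkpoint
processor = BeitImageProcessor.from_pretrained(repo)
model = BeitForImageClassification.from_pretrained(repo)

inputs = processor(images=Image.open("cat.jpg").convert("RGB"), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```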
Brief Details: A multilingual NER model based on XLM-RoBERTa, fine-tuned for token classification in 5 languages with strong precision (0.92) and F1 score (0.92)
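A token-classification sketch with the Transformers pipeline; the repo id below is a placeholder for the fine-tuned XLM-RoBERTa checkpoint:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-org/xlm-roberta-base-ner",  # hypothetical repo id
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Angela Merkel besuchte Paris im Juli."))
```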
Brief Details: epiCRealism - A specialized Stable Diffusion model focused on photorealistic image generation with 59.5K+ downloads. Popular for portraits and detailed faces.
Brief Details: Experimental LoRA for FLUX.1-dev focused on enhancing photorealism, reducing shallow depth of field, and improving scene complexity with "Boring Reality" datasets.
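A sketch of loading a LoRA adapter on top of FLUX.1-dev with `diffusers`; the adapter repo id and weight file name are placeholders, and the prompt and sampler settings are illustrative:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# Placeholder adapter repo id and weight name for this experimental LoRA.
pipe.load_lora_weights("your-org/boring-reality-lora", weight_name="boring_reality.safetensors")

image = pipe(
    "a cluttered kitchen counter, natural window light, deep depth of field",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("boring_reality.png")
```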
Brief Details: DeiT base model with a distillation token for image classification. 87.3M params, trained on ImageNet-1k. Efficient vision transformer architecture.