Brief-details: A specialized pencil-sketch AI model that transforms images into minimalist, impressionistic pencil drawings, with an emphasis on negative space and monochromatic style.
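If the model ships as a diffusers-compatible LoRA on a standard text-to-image base (an assumption here; the entry names neither the repo id nor the base model), an image-to-image pass is the usual way to apply the style. Both ids below are placeholders:

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

# Assumed base model; "author/pencil-sketch-lora" is a hypothetical repo id.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("author/pencil-sketch-lora")  # hypothetical LoRA id

init_image = load_image("portrait.jpg").resize((1024, 1024))
sketch = pipe(
    prompt="minimalist pencil sketch, heavy negative space, monochrome",
    image=init_image,
    strength=0.6,        # how far the output departs from the source photo
    guidance_scale=7.0,
).images[0]
sketch.save("sketch.png")
```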
Brief-details: A merged language model combining Qwen2.5-32B with zetasepic/Qwen2.5-32B-Instruct-abliterated-v2 via the TIES merge method for enhanced instruction-following.
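A TIES merge like this is typically produced with mergekit. The sketch below writes a minimal config and runs mergekit's standard `mergekit-yaml` entrypoint; the `density`/`weight` values are illustrative assumptions, not the actual recipe:

```python
import pathlib
import subprocess
import textwrap

# Minimal TIES config in mergekit's YAML format; density/weight are assumed.
config = textwrap.dedent("""\
    merge_method: ties
    base_model: Qwen/Qwen2.5-32B
    models:
      - model: zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
        parameters:
          density: 0.5
          weight: 1.0
    dtype: bfloat16
""")
pathlib.Path("ties.yaml").write_text(config)

# mergekit's CLI entrypoint (pip install mergekit); writes the merged model.
subprocess.run(["mergekit-yaml", "ties.yaml", "./merged-qwen2.5-32b"], check=True)
```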
Brief-details: AI model for age-group detection in facial images, achieving 59% accuracy across 9 age groups using a Vision Transformer architecture for classification.
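Inference with such a classifier is a one-liner via the transformers pipeline API; the repo id below is a placeholder for the actual model:

```python
from transformers import pipeline

# "author/vit-age-classifier" is a hypothetical repo id; substitute the real one.
classifier = pipeline("image-classification", model="author/vit-age-classifier")

# Returns the top 3 of the 9 age-group labels with confidence scores.
for pred in classifier("face.jpg", top_k=3):
    print(f"{pred['label']}: {pred['score']:.2f}")
```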
Brief-details: InvSR is an advanced image super-resolution model that uses diffusion inversion on SD-Turbo to perform arbitrary-step upscaling, developed by Zongsheng Yue in 2024.
Brief-details: A sophisticated 32B parameter merged model combining multiple Qwen variants, optimized for diverse tasks including coding, math, and general instruction following.
Brief-details: ESM Cambrian, a 600M parameter protein language model focused on biological representation learning. Part of the ESM model family; released under a non-commercial license.
Brief-details: Enhanced 4-bit quantized version of Pixtral-12B using Unsloth's dynamic quantization, offering better accuracy than uniform 4-bit quantization while maintaining low VRAM usage. Supports combined image+text inputs.
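Unsloth's `-bnb-4bit` checkpoints typically embed their bitsandbytes quantization config, so a plain transformers load picks up the 4-bit weights automatically. A sketch assuming a repo id of that form and the standard Llava-style chat template (the exact id is not given above):

```python
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

# Illustrative repo id; the embedded quantization config keeps VRAM usage low.
model_id = "unsloth/Pixtral-12B-2409-unsloth-bnb-4bit"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

image = Image.open("photo.jpg")
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```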
Brief-details: Qwen2-VL-72B is a state-of-the-art vision-language model with 72B parameters, featuring dynamic resolution handling, extended video understanding, and multilingual support.
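Usage follows the standard Qwen2-VL recipe, with the `qwen-vl-utils` helper package handling the dynamic-resolution image preprocessing; this sketch assumes the Instruct variant for chat-style prompting:

```python
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-72B-Instruct"  # Instruct variant assumed for chat
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image", "image": "https://example.com/scene.jpg"},
    {"type": "text", "text": "What is happening in this image?"},
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
images, videos = process_vision_info(messages)  # handles dynamic resolution
inputs = processor(text=[text], images=images, videos=videos,
                   padding=True, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output[:, inputs.input_ids.shape[1]:],
                             skip_special_tokens=True)[0])
```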
Brief-details: 3B parameter LLM by Nous Research, fine-tuned from Llama 3.2. Excels at function calling, structured outputs, and general assistance, with strong reasoning abilities.
Brief-details: 12B parameter model in GGUF format with multiple quantization variants (Q2-Q8), trading quality against memory footprint across file sizes from 4.9GB to 13.1GB.
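Variants like these load directly with llama-cpp-python, which can pull a single quantization file from the Hub by glob pattern; the repo and file names below are placeholders:

```python
from llama_cpp import Llama

# Hypothetical repo id; pick the quantization variant that fits your memory
# budget (Q2 is smallest/least accurate, Q8 largest/most accurate).
llm = Llama.from_pretrained(
    repo_id="author/model-12B-GGUF",
    filename="*Q4_K_M.gguf",   # glob selecting one mid-size variant
    n_ctx=4096,
)
result = llm("Q: What does quantization trade away? A:", max_tokens=64)
print(result["choices"][0]["text"])
```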
Brief-details: Earth observation foundation model with 300M parameters, using a modified ViT architecture to process satellite imagery with temporal and spatial data.
Brief-details: EXAONE-3.5-32B-Instruct is a powerful bilingual (English/Korean) LLM with 32B parameters, 32K context window, and state-of-the-art performance in real-world tasks.
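EXAONE uses a custom model class, so loading through transformers needs `trust_remote_code=True`. A minimal chat sketch (generation settings are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LGAI-EXAONE/EXAONE-3.5-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto",
    trust_remote_code=True,  # custom EXAONE architecture
)

# Bilingual model: English and Korean prompts both work.
messages = [{"role": "user", "content": "Explain the 32K context window in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```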
Brief-details: CogACT-Small is a vision-language-action model that combines DINOv2, SigLIP, and Llama-2 to predict robot actions from images and text instructions, released under the MIT license.
Brief-details: A minimal test model built on the FalconMamba architecture, designed specifically for TRL library unit testing purposes.
Brief-details: A compact LLaMA-based causal language model designed specifically for TRL (Transformer Reinforcement Learning) library testing purposes, emphasizing minimal architecture.
Brief-details: A minimal GPTNeoX-based causal language model designed specifically for TRL (Transformer Reinforcement Learning) library testing purposes.
Brief-details: A minimal Phi3-based test model for TRL library development, designed for unit testing and internal validation.
Brief-details: A minimal Mistral-based causal language model designed specifically for TRL library testing purposes. Optimized for unit testing workflows.
Brief-details: A minimal test-focused Gemma-based causal language model designed specifically for TRL (Transformer Reinforcement Learning) library unit testing purposes.
Brief-details: A minimal test-focused causal language model designed specifically for TRL (Transformer Reinforcement Learning) library unit testing purposes.
Brief-details: A minimal Gemma2-based causal language model designed specifically for TRL (Transformer Reinforcement Learning) library unit testing purposes, emphasizing lightweight functionality.
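All of the tiny TRL checkpoints above exist so the library's test suite can exercise trainer code paths in seconds. A sketch of that pattern, assuming the `trl-internal-testing` naming convention and a recent TRL with `SFTConfig` (exact repo ids and constructor arguments vary by version):

```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical tiny-model id following the trl-internal-testing pattern.
model_id = "trl-internal-testing/tiny-LlamaForCausalLM-3.2"
train_dataset = Dataset.from_dict({"text": ["hello world"] * 8})

trainer = SFTTrainer(
    model=model_id,  # SFTTrainer accepts a repo id and loads the model itself
    args=SFTConfig(output_dir="/tmp/sft-test", max_steps=2, report_to="none"),
    train_dataset=train_dataset,
)
trainer.train()  # completes almost instantly because the model is tiny
```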