Brief-details: A specialized LoRA model for SDXL focused on generating Deadpool and Wolverine images, featuring detailed character renderings with customizable artistic styles.
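A minimal sketch of how an SDXL character LoRA like this is typically used with the diffusers library; the LoRA repository ID and the trigger phrasing below are placeholders, not this model's actual values (check its model card).

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base pipeline in half precision.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the character LoRA (placeholder repo ID).
pipe.load_lora_weights("your-namespace/deadpool-wolverine-sdxl-lora")

# Prompt wording and any trigger words vary per LoRA; see the model card.
image = pipe(
    "deadpool and wolverine standing back to back, detailed comic book style",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("deadpool_wolverine.png")
```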
Brief-details: A 303M-parameter Vision Transformer (ViT-L/16) pre-trained on 450M histology images, optimized for biomarker discovery and medical image analysis.
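A minimal sketch of extracting tile-level embeddings from a histology ViT encoder with transformers; the repository ID is a placeholder, and gated pathology models may require authentication.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

# Placeholder repo ID for the histology ViT-L/16 encoder.
repo = "your-namespace/histology-vit-l16"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModel.from_pretrained(repo).eval()

# Embed a single H&E tile; downstream biomarker models consume these features.
tile = Image.open("tile.png").convert("RGB")
inputs = processor(images=tile, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Use the CLS-token embedding as the tile-level feature vector.
features = outputs.last_hidden_state[:, 0]
print(features.shape)  # e.g. torch.Size([1, 1024]) for a ViT-L backbone
```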
Brief Details: AI model for music genre classification across 10 genres using wav2vec2. 94.6M params, PyTorch-based, suitable for audio analysis and content tagging.
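A minimal sketch of running genre classification through the transformers audio-classification pipeline; the repository ID is a placeholder for the actual checkpoint.

```python
from transformers import pipeline

# Placeholder repo ID for the wav2vec2 genre classifier.
classifier = pipeline(
    "audio-classification",
    model="your-namespace/wav2vec2-music-genre",
)

# Returns the top genres with confidence scores for a local audio file.
predictions = classifier("song.wav", top_k=3)
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```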
Brief Details: A 2.21B parameter visual retrieval model based on Qwen2-VL-2B-Instruct, optimized for efficient document indexing via a ColBERT-style late-interaction strategy, released in BF16 precision.
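The ColBERT-style late-interaction score keeps one embedding per query token and per document patch, then sums, over query tokens, the maximum similarity against the document's embeddings. A minimal sketch of that MaxSim scoring in plain PyTorch; shapes and names are illustrative, not the model's actual API.

```python
import torch

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """ColBERT-style late interaction.

    query_emb: (num_query_tokens, dim) multi-vector query embedding
    doc_emb:   (num_doc_patches, dim) multi-vector page/document embedding
    """
    # Cosine-style similarity via normalized dot products.
    q = torch.nn.functional.normalize(query_emb, dim=-1)
    d = torch.nn.functional.normalize(doc_emb, dim=-1)
    sim = q @ d.T                        # (num_query_tokens, num_doc_patches)
    return sim.max(dim=-1).values.sum()  # MaxSim per query token, then sum

# Toy example: 8 query tokens, 100 document patches, 128-dim embeddings.
score = maxsim_score(torch.randn(8, 128), torch.randn(100, 128))
print(float(score))
```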
BRIEF-DETAILS: FastText-based language identification model supporting 2100+ languages with high accuracy, ideal for low-resource language detection
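A minimal sketch of language identification with the fasttext library; the binary filename is an assumption, so check the model card for the exact artifact to download.

```python
import fasttext

# Path to the downloaded fastText language-ID binary (filename is an assumption).
model = fasttext.load_model("model.bin")

# Predict the top-3 language labels with confidence scores.
labels, scores = model.predict("Bonjou, kijan ou ye?", k=3)
for label, score in zip(labels, scores):
    print(label, round(float(score), 3))
```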
Brief-details: A 7B parameter Process Reward Model (PRM) built on Qwen2.5-Math-7B-Instruct, specialized in mathematical reasoning and code analysis with state-of-the-art performance
Brief-details: A 7.62B parameter merged LLM combining Qwen2.5 variants for creative text generation and instruction following, built with the Model Stock merge technique in bfloat16 precision.
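Model Stock derives its merge ratio from the geometry of the fine-tuned checkpoints relative to their shared pretrained anchor. A rough per-tensor sketch of that interpolation rule as I understand the published technique; the ratio formula and the helper below are assumptions for illustration, not this model's exact merge recipe.

```python
import torch

def model_stock_merge(w0: torch.Tensor, w1: torch.Tensor, w2: torch.Tensor) -> torch.Tensor:
    """Merge two fine-tuned tensors (w1, w2) with their pretrained anchor w0.

    Assumed Model Stock rule: t = 2*cos(theta) / (1 + cos(theta)), where theta is
    the angle between the deltas (w1 - w0) and (w2 - w0); the result interpolates
    between the pairwise average and the anchor.
    """
    d1, d2 = (w1 - w0).flatten(), (w2 - w0).flatten()
    cos = torch.dot(d1, d2) / (d1.norm() * d2.norm() + 1e-12)
    t = 2 * cos / (1 + cos)
    w_avg = (w1 + w2) / 2
    return t * w_avg + (1 - t) * w0

# Toy example on a single weight tensor.
w0 = torch.randn(4, 4)
merged = model_stock_merge(w0, w0 + 0.1 * torch.randn(4, 4), w0 + 0.1 * torch.randn(4, 4))
print(merged.shape)
```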
Brief-details: A 7.62B parameter LLM based on Qwen2.5, enhanced with UNA (Uniform Neural Alignment) and MGS techniques, achieving top scores in the 7-8B category
BRIEF DETAILS: A 12B parameter creative writing & roleplay model built on Mistral, featuring high emotional intelligence (80 EQ score) and advanced storytelling capabilities
BRIEF DETAILS: Qwen2.5-Coder-3B-Instruct is a specialized code-focused LLM with 3.09B parameters, offering 32K context length and enhanced abilities in code generation, reasoning, and fixing.
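A minimal sketch of prompting Qwen2.5-Coder-3B-Instruct through transformers' chat template; the generation settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Qwen/Qwen2.5-Coder-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks if a string is a palindrome."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens before decoding.
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```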
Brief Details: A 32.7B parameter Russian-adapted Qwen2.5 model featuring improved tokenization, continued pretraining, and the LEP (Learned Embedding Propagation) technique for enhanced Russian text generation.
Brief Details: CogVideoX1.5-5B is a powerful 5B-parameter text-to-video generation model that produces high-resolution (1360x768), 16 fps videos from English prompts.
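A rough sketch of text-to-video generation with the diffusers CogVideoX pipeline; loading the 1.5 checkpoint through this class assumes a recent diffusers release, and the prompt and sampling settings are illustrative.

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# Assumes a recent diffusers release that supports the CogVideoX1.5 checkpoints.
pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX1.5-5B", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # helps fit the 5B model on a single GPU

frames = pipe(
    prompt="A golden retriever surfing a wave at sunset, cinematic lighting",
    num_inference_steps=50,
    guidance_scale=6.0,
).frames[0]

export_to_video(frames, "surfing_dog.mp4", fps=16)
```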
Brief-details: A specialized LoRA model trained on FLUX.1-dev for generating black & white coloring book-style images, featuring detailed line art and sketches, with a LoRA network dimension of 64.
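A minimal sketch of attaching a FLUX.1-dev LoRA with diffusers; the LoRA repository ID is a placeholder, FLUX.1-dev itself is gated, and any trigger word comes from the model card.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model (gated; requires accepting its license).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the coloring-book LoRA (placeholder repo ID).
pipe.load_lora_weights("your-namespace/flux-coloring-book-lora")

image = pipe(
    "coloring book page, a castle on a hill, clean black and white line art",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("coloring_page.png")
```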
Brief Details: Ovis1.6-Llama3.2-3B is a 4.14B parameter MLLM optimized for edge computing, achieving SOTA performance on multimodal tasks among models under 4B parameters.
Brief-details: Leading 78B parameter LLM fine-tuned with ORPO (Odds Ratio Preference Optimization), achieving a top rank on the Open LLM Leaderboard with strong performance in reasoning and general tasks
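ORPO folds preference optimization into supervised fine-tuning without a separate reference model. A rough sketch of an ORPO run with the TRL library, using a much smaller base model and a public preference dataset chosen purely for illustration; argument names follow recent TRL versions and may differ in older releases.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

# Small base model and public preference data, purely for illustration.
base = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# ORPO trains on prompt / chosen / rejected preference pairs.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train[:1%]")

config = ORPOConfig(
    output_dir="orpo-demo",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    beta=0.1,  # weight of the odds-ratio preference term
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```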
Brief Details: NuminaMath-7B-CoT is a specialized 7B parameter LLM fine-tuned for mathematical reasoning, trained on 860k+ problem-solution pairs using chain-of-thought methodology.
BRIEF-DETAILS: A specialized LoRA model for SDXL that generates high-quality textile patterns, including Kalamkari, Ikat, and floral designs with 4K resolution output
BRIEF-DETAILS: A LoRA model for Stable Diffusion XL that specializes in Red Dead Redemption-style image generation with minimalist aesthetics and vintage effects
Brief Details: A LoRA adaptation of FLUX.1 for artistic outdoor scene generation, trained on 14 images with the trigger word "GArt". Optimized for 768x1024 resolution.
Brief Details: A powerful 27B parameter Bulgarian-English LLM built on Google's Gemma 2, featuring state-of-the-art performance in Bulgarian while maintaining strong English capabilities.
Brief Details: A LoRA fine-tune of the FLUX.1-dev model optimized for DALL-E-style image generation, with a LoRA network dimension of 64 and a focus on photo-realistic outputs.