Brief-details: StdGEN is a specialized AI pipeline for generating high-quality 3D characters from single images using semantic decomposition and multiple processing stages.
Brief-details: HeAR is Google's health AI model for healthcare applications, gated on Hugging Face behind an explicit user agreement with immediate request processing.
Brief-details: DeepSeek-V3-0324-4bit is a 4-bit quantized MLX-format conversion of DeepSeek-V3, optimized for efficient deployment using the MLX framework.
Brief-details: World's first generative AI model for creating SWF video games and animations through byte-level generation, developed by SamsungSAILMontreal.
Brief-details: 24B parameter Mistral model converted to GGUF format, optimized with Q6_K quantization for efficient local deployment via llama.cpp.
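Quantization schemes like Q6_K store weights in blocks of low-bit integers with a shared per-block scale. The sketch below illustrates that block-wise idea in plain Python; it is a simplification, not the actual Q6_K byte layout (which uses super-blocks and packed 6-bit encodings):

```python
# Simplified block-wise quantization sketch (illustrative only; the real
# Q6_K format uses packed 6-bit values and super-block scales).

def quantize_block(weights, bits=6):
    """Quantize a block of floats to signed ints sharing one scale."""
    qmax = 2 ** (bits - 1) - 1              # 31 for signed 6-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_block(q, scale):
    """Recover approximate floats from the quantized block."""
    return [v * scale for v in q]

block = [0.12, -0.53, 0.98, -0.07]
q, scale = quantize_block(block)
restored = dequantize_block(q, scale)
# Per-weight reconstruction error is bounded by scale / 2.
```

The per-block scale is why larger quant types (Q6_K vs. Q4_K) trade file size for lower reconstruction error.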
Brief-details: DiT-based real-time video generation model capable of 24 FPS at 768×512 resolution. Supports text-to-video and image-to-video generation with CUDA acceleration.
Brief-details: EtherealAurora-12B-v2 is a merged ChatML model combining EtherealAurora-12B and Mistral-Nemo-Instruct via SLERP (spherical linear interpolation) with specialized weighting parameters.
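SLERP merges interpolate along the great-circle arc between two weight vectors instead of linearly averaging them, which preserves the magnitude geometry of the weights. A minimal pure-Python sketch of the interpolation itself (not the actual merge tooling, which operates tensor-by-tensor over full checkpoints):

```python
import math

def slerp(v0, v1, t):
    """Spherical linear interpolation between two weight vectors.

    Falls back to plain linear interpolation when the vectors are
    nearly colinear and the sin() denominator becomes unstable.
    """
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    cos_theta = max(-1.0, min(1.0, dot / (n0 * n1)))
    theta = math.acos(cos_theta)
    if theta < 1e-6:  # nearly parallel: lerp is fine
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# t=0 and t=1 return the endpoints; t=0.5 lies midway along the arc.
merged = slerp([1.0, 0.0], [0.0, 1.0], 0.5)
```

Merge configs typically vary `t` per layer or per tensor type, which is what "specialized weighting parameters" refers to in entries like the one above.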
Brief-details: MambaVision-L3-512-21K is a groundbreaking hybrid vision model combining Mamba and Transformer architectures, achieving 88.1% Top-1 accuracy on ImageNet-1K with 739.6M parameters.
Brief-details: Advanced 7B-parameter multimodal LLM fine-tuned for detailed image captioning, based on Qwen2.5-VL, with relaxed constraints for more natural descriptions.
Brief-details: KDTalker - An advanced AI model for generating diverse and accurate audio-driven talking portraits using implicit keypoint-based spatiotemporal diffusion methods.
Brief-details: Qwerky-72B is an RWKV-based linear-attention model converted from Qwen 2.5 72B, offering up to 1000x lower inference cost while maintaining competitive performance.
Brief-details: Specialized RAG-optimized LLM built on Qwen2.5 14B, fine-tuned for precise document retrieval and structured JSON output with source attribution and grounded answers.
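RAG models of this kind are prompted to emit structured JSON that cites the retrieved documents backing each answer. A hypothetical validation sketch for such output (the field names `answer` and `sources` are illustrative assumptions, not the model's actual schema):

```python
import json

# Hypothetical response: an answer plus the document IDs it is grounded in.
raw = '''{
  "answer": "The warranty period is 24 months.",
  "sources": ["doc_17", "doc_42"]
}'''

retrieved_ids = {"doc_17", "doc_42", "doc_99"}

def validate_grounded(raw_json, retrieved_ids):
    """Parse the model's JSON and verify every cited source was retrieved."""
    data = json.loads(raw_json)
    if not isinstance(data.get("answer"), str) or not data["answer"]:
        raise ValueError("missing or empty answer")
    cited = set(data.get("sources", []))
    if not cited:
        raise ValueError("no sources cited: answer is not grounded")
    unknown = cited - retrieved_ids
    if unknown:
        raise ValueError(f"cites unretrieved sources: {unknown}")
    return data

result = validate_grounded(raw, retrieved_ids)
```

Rejecting answers that cite nothing, or that cite documents outside the retrieval set, is a common guard against hallucinated attribution in RAG pipelines.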
Brief-details: SPIDER-thorax-model is a specialized deep learning model for thoracic pathology classification, achieving 96.2% accuracy across 14 distinct tissue classes, trained on 78,307 central patches.
Brief-details: FlexWorld is an advanced AI model for transforming static images into dynamic 3D scene videos through flexible view synthesis and progressive scene expansion.
Brief-details: A 395M parameter English embedding model with 128k context, supporting both single- and multi-vector embeddings. Achieves SOTA on the LongEmbed benchmark with a 0.86 score.
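Single-vector retrieval scores a query-document pair with one cosine similarity, while multi-vector (late-interaction) retrieval sums, over the query's token embeddings, each token's best match among the document's token embeddings. A toy comparison in plain Python (illustrative only, not this model's API; the 2-d vectors are made up):

```python
import math

def cos(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def single_vector_score(q_vec, d_vec):
    """One pooled embedding per text: a single cosine similarity."""
    return cos(q_vec, d_vec)

def maxsim_score(q_vecs, d_vecs):
    """Late interaction: each query token matches its best document token."""
    return sum(max(cos(q, d) for d in d_vecs) for q in q_vecs)

query_tokens = [[1.0, 0.0], [0.0, 1.0]]
doc_tokens = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
score = maxsim_score(query_tokens, doc_tokens)
```

Multi-vector scoring is more precise for long documents, at the cost of storing one embedding per token instead of one per text, which is why models supporting both modes let users trade accuracy for index size.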
Brief-details: Enhanced 1.5B parameter LLM using reinforcement learning (RL), achieving 80% accuracy on AMC23 and 46.7% on AIME24, trained efficiently on 4 A40 GPUs.
Brief-details: A 7.8B parameter AWQ-quantized language model optimized for reasoning tasks, featuring 32K context length and strong performance in math/coding benchmarks.
Brief-details: "Evil"-tuned 27B-parameter variant of Gemma distributed in GGUF format, featuring vision capabilities and modified personality traits.
Brief-details: 32B parameter transformer model trained with reinforcement fine-tuning (RFT), optimized for task adaptability with minimal labeled data and dynamic response adjustment.
Brief-details: 0.5B parameter draft model for speculative decoding with larger Mistral models. Trained on 24M tokens across 6 languages for efficient text generation.
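In speculative decoding, the small draft model proposes several tokens cheaply and the large target model verifies them in a single pass, keeping the agreeing prefix. A toy sketch of the accept loop with greedy verification (a simplification; production implementations use probabilistic acceptance over the two models' distributions):

```python
def speculative_step(draft_next, target_next, context, k=4):
    """One speculative decoding step with greedy verification.

    draft_next / target_next are toy stand-ins for model calls: each
    maps a token context to the model's next token.
    """
    # 1. Draft model proposes k tokens autoregressively.
    proposed = []
    ctx = list(context)
    for _ in range(k):
        tok = draft_next(ctx)
        proposed.append(tok)
        ctx.append(tok)

    # 2. Target model checks the proposals: keep the agreeing prefix,
    #    then emit the target's own token at the first mismatch.
    accepted = []
    ctx = list(context)
    for tok in proposed:
        target_tok = target_next(ctx)
        if target_tok != tok:
            accepted.append(target_tok)  # target overrides the draft
            break
        accepted.append(tok)
        ctx.append(tok)
    return accepted

# Toy models: the draft always guesses the next integer; the target
# agrees until the last token reaches 3, then insists on 99.
draft = lambda ctx: ctx[-1] + 1
target = lambda ctx: ctx[-1] + 1 if ctx[-1] < 3 else 99
out = speculative_step(draft, target, [1], k=4)  # accepts 2, 3, then 99
```

The speedup comes from the target model scoring all k proposals in one forward pass instead of k sequential ones, so output quality matches the target model while most tokens are generated at draft-model cost.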
Brief-details: EXAONE-Deep-32B-AWQ is a powerful 32B parameter LLM optimized for reasoning tasks, featuring 4-bit quantization and 32K context length.