Brief-details: GCP - An environment-monitoring model repo by unslothai that tracks and logs runtime statistics to flag breaking environments and system issues
Brief-details: Mistral-7B-v0.3 is a powerful 7B parameter language model from MistralAI, offering state-of-the-art performance in natural language processing tasks.
Brief-details: DeepSeek-R1-Distill-Qwen-1.5B optimized with Unsloth for 4-bit quantization, enabling roughly 70% memory reduction and 2x faster finetuning of this distilled reasoning model
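The 4-bit quantization behind entries like this one can be sketched in a few lines: each block of weights is rescaled and each value is stored as the index of the nearest entry in a 16-level codebook. This is a minimal illustration of the idea only; the uniform grid below is a stand-in, not the actual NF4 codebook or Unsloth's implementation.

```python
import numpy as np

# 16 levels -> 4 bits per weight. A uniform grid here; real NF4-style
# schemes use a non-uniform, normally-distributed codebook.
LEVELS = np.linspace(-1.0, 1.0, 16)

def quantize_4bit(weights: np.ndarray):
    """Scale a block to [-1, 1], then store the index of the nearest level."""
    scale = float(np.abs(weights).max()) or 1.0
    idx = np.abs(weights[:, None] / scale - LEVELS[None, :]).argmin(axis=1)
    return idx.astype(np.uint8), scale

def dequantize_4bit(idx: np.ndarray, scale: float) -> np.ndarray:
    return LEVELS[idx] * scale

w = np.array([0.31, -0.92, 0.05, 0.77])
idx, scale = quantize_4bit(w)
w_hat = dequantize_4bit(idx, scale)
# Each weight now needs 4 bits instead of 32 - an ~8x raw reduction;
# in practice, per-block scales and other metadata bring the overall
# savings closer to the ~70% figure quoted for such models.
```

The reconstruction error is bounded by half the spacing between codebook levels, which is why finer (non-uniform) codebooks matter for accuracy.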
Brief-details: Salesforce's research-focused Mixture-of-Experts (MoE) model designed for academic purposes, emphasizing ethical AI deployment and safety considerations.
Brief-details: Meta's chameleon-7b is a 7B parameter language model designed for research purposes, focused on adaptable language understanding and generation with restricted usage under Meta's research license.
Brief-details: LWM-Text-Chat-1M: Open-source chat model based on LLaMA-2, trained on 800 filtered Books3 documents. Developed by LargeWorldModel in Dec 2023.
Brief-details: Facebook's SeamlessExpressive - A specialized AI model for expressive speech translation, requiring Meta approval for access via Hugging Face
Brief-details: A Dreambooth-trained model combining Anything and Midjourney v4.1 aesthetics, created by Joeythemonster using TheLastBen's fast-DreamBooth methodology.
Brief-details: Japanese-optimized Stable Diffusion model by rinna, featuring CreativeML OpenRAIL-M license for image generation with Japanese-specific capabilities
Brief-details: FastHunyuan is an accelerated video generation model that achieves 8X speedup over HunyuanVideo, requiring only 6 diffusion steps and supporting NF4 quantization for 20GB VRAM inference.
Brief-details: Multilingual text embedding model supporting 100+ languages, optimized for semantic search & retrieval. 12-layer architecture with 384-dim embeddings.
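The semantic-search use case this embedding model targets reduces to ranking documents by cosine similarity between their vectors and a query vector. A minimal sketch, using random 384-dim vectors as stand-ins for real model outputs:

```python
import numpy as np

# Random stand-ins for 384-dim sentence embeddings; a real pipeline
# would obtain these by encoding text with the embedding model.
rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(5, 384))              # 5 "documents"
query = doc_embeddings[2] + 0.1 * rng.normal(size=384)  # near document 2

def cosine_top_k(query: np.ndarray, docs: np.ndarray, k: int = 3):
    """Return indices of the k documents most similar to the query."""
    docs_n = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    q_n = query / np.linalg.norm(query)
    scores = docs_n @ q_n
    return np.argsort(-scores)[:k]

ranking = cosine_top_k(query, doc_embeddings)
# Document 2 ranks first, since the query was derived from it.
```

Normalizing once and taking dot products is the standard trick: it turns cosine similarity into a single matrix-vector multiply over the whole corpus.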
Brief-details: Vision Transformer model fine-tuned for facial emotion recognition with 84.34% accuracy, supporting 7 emotion classes. Built on vit-base-patch16-224-in21k architecture.
Brief-details: Comprehensive collection of pretrained RVC (Retrieval-based Voice Conversion) models and HuBERT models, including specialized versions for multiple languages and voice types.
Brief-details: A multilingual 8B parameter ORPO-trained LLaMA3 variant optimized for performance across 6 languages, trained on high-quality ranked responses using the Mitsu dataset.
Brief-details: Meta's 70B parameter LLM from the Llama 3 series, with enhanced capabilities for natural language processing and generation.
Brief-details: A collection of LoRA models curated by KirtiKousik, hosted on Hugging Face, focusing on fine-tuned language model adaptations.
Brief-details: SUPIR_pruned is a lightweight version of the SUPIR model created by Kijai, optimized for image restoration and enhancement tasks while maintaining efficiency
Brief-details: MuseTalk is a real-time lip-syncing AI model capable of 30fps+ performance, supporting multiple languages and high-quality face region modifications at 256x256 resolution.
Brief-details: A specialized model collection designed for ComfyUI workflow integration, created by xingren23 to enhance AI image generation and processing capabilities.
Brief-details: State-of-the-art sentence embedding model achieving 64.68% avg on MTEB benchmarks, with MRL and binary quantization support. Outperforms OpenAI's text-embedding-3-large.
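The binary quantization mentioned in this card can be sketched concretely: keep only the sign of each embedding dimension, pack the sign bits into bytes, and compare vectors by Hamming distance. The random vectors below are stand-ins for real embeddings, and this is the generic technique, not this model's exact recipe.

```python
import numpy as np

# Stand-ins for 1024-dim float32 embeddings.
rng = np.random.default_rng(1)
emb = rng.normal(size=(4, 1024))

def to_binary(vectors: np.ndarray) -> np.ndarray:
    """Pack sign bits into uint8: 1024 float32s (4 KB) become 128 bytes."""
    return np.packbits(vectors > 0, axis=-1)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Count differing bits between two packed codes."""
    return int(np.unpackbits(a ^ b).sum())

codes = to_binary(emb)
d_self = hamming(codes[0], codes[0])  # 0: identical codes
# Similar embeddings share most sign bits, so small Hamming distance
# approximates high cosine similarity at 32x less storage.
```

The 32x storage reduction (1 bit per dimension instead of 32) is what makes binary codes attractive for large-scale retrieval, with a rerank step over full-precision vectors typically recovering most of the lost accuracy.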
Brief-details: Voice synthesis model trained on ATRI game data (112min dataset) using GPT-SoVITS architecture. Non-commercial use only. Created by 2DIPW.