Brief-details: FLUX.1-Depth-dev-lora is a LoRA adapter for FLUX.1-dev that enables depth-map-conditioned image generation, developed by black-forest-labs under a non-commercial license.
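A minimal usage sketch for the entry above, assuming the diffusers FluxControlPipeline API and a precomputed depth map saved locally (file names are placeholders):

```python
import torch
from diffusers import FluxControlPipeline
from diffusers.utils import load_image

# Load the base FLUX.1-dev model and attach the depth LoRA adapter.
pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora")

# depth.png is a precomputed depth map (placeholder path) used to condition generation.
depth_map = load_image("depth.png")
image = pipe(
    prompt="a cozy cabin in a snowy forest",
    control_image=depth_map,
    num_inference_steps=30,
    guidance_scale=10.0,
).images[0]
image.save("output.png")
```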
BRIEF-DETAILS: 12B parameter GGUF-compatible language model requiring 15GB RAM, optimized for efficient deployment with various quantization options
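A sketch of loading a quantized GGUF build with llama-cpp-python; the file name below is hypothetical and the quantization level determines the actual RAM footprint:

```python
from llama_cpp import Llama

# Path to a quantized GGUF file (hypothetical filename); Q4_K_M trades a little
# accuracy for a much smaller memory footprint than the full-precision weights.
llm = Llama(model_path="model-12b-Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)

out = llm.create_completion(
    "Explain GGUF quantization in one sentence.",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```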
BRIEF-DETAILS: Ministral-8B-Instruct-2410 is an 8B parameter instruction-tuned language model from Mistral AI, optimized for instruction following and general text generation.
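A minimal chat-style sketch using the transformers text-generation pipeline; access to the repo may require accepting Mistral's license on the Hub:

```python
from transformers import pipeline

# Chat-style generation with an instruction-tuned model.
chat = pipeline(
    "text-generation",
    model="mistralai/Ministral-8B-Instruct-2410",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what an instruction-tuned model is."}]
reply = chat(messages, max_new_tokens=128)
print(reply[0]["generated_text"][-1]["content"])
```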
BRIEF-DETAILS: Optimized FP8 model for ComfyUI offering faster performance and reduced memory usage, ideal for efficient image generation workflows
BRIEF-DETAILS: Mistral-Nemo-Instruct-2407 is a 12B parameter instruction-tuned language model from Mistral AI, developed in collaboration with NVIDIA and featuring a 128K-token context window.
Brief-details: Gemma-2-9b-it is Google's instruction-tuned 9B parameter language model, offering balanced performance for various NLP tasks with controlled access requirements.
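Since this is a gated repo, a short sketch of the access workflow: accept the license on the model page, then authenticate with a Hugging Face token (placeholder below) before loading:

```python
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

# Gated repos such as google/gemma-2-9b-it require accepting the license on
# the Hub and authenticating with a user access token ("hf_xxx" is a placeholder).
login(token="hf_xxx")

tok = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b-it", device_map="auto")
```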
Brief-details: A universal multi-modal embedding model that combines text and image processing capabilities for hybrid retrieval tasks, based on BGE architecture with 768/1024 dimensions.
BRIEF DETAILS: Audio segmentation model by pyannote, used as a building block in speaker diarization pipelines. MIT-licensed and open source.
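The segmentation model is normally consumed through a pyannote.audio pipeline rather than called directly; a sketch assuming the speaker-diarization pipeline, a Hub token, and a local audio file (both placeholders):

```python
from pyannote.audio import Pipeline

# The segmentation model serves as a building block of the speaker-diarization
# pipeline; "audio.wav" and "hf_xxx" are placeholder values.
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1", use_auth_token="hf_xxx"
)

diarization = pipeline("audio.wav")
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")
```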
Brief-details: QwQ-32B-bf16 is a 32B parameter model converted to MLX format, optimized with BF16 precision, designed for efficient deployment on Apple Silicon devices.
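A minimal sketch with the mlx-lm package, assuming the weights live under the mlx-community namespace on the Hub:

```python
from mlx_lm import load, generate

# Load the MLX-converted weights (repo id assumed) and run generation
# natively on Apple Silicon.
model, tokenizer = load("mlx-community/QwQ-32B-bf16")

prompt = "Briefly explain why BF16 is a good fit for on-device inference."
text = generate(model, tokenizer, prompt=prompt, max_tokens=200, verbose=True)
print(text)
```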
Brief Details: 24B parameter LLM based on Mistral, fine-tuned for improved prose and creativity with support for multiple chat templates including Mistral v7 Tekken.
Brief-details: A 1B parameter Japanese-English LLM trained on 10T tokens, optimized for math and coding tasks. Strong performance on JMMLU and JHumanEval benchmarks.
Brief Details: A 70B parameter LLM combining a Llama 3.3 base with DeepSeek-R1-Distill via the SCE merge methodology, aimed at stronger reasoning and character insight.
Brief-details: LLaSE-G1 is a unified speech enhancement model leveraging LLaMA architecture for multiple audio processing tasks, including noise suppression, speaker extraction, and echo cancellation.
Brief-details: A 7B parameter OCR-focused language model in GGUF format, developed by Allen AI for efficient optical character recognition and document-to-text extraction.
Brief-details: A bilingual 2.1B parameter Korean-English LLM developed by Kakao, optimized for compute efficiency with strong Korean language capabilities and competitive English performance
Brief Details: A 14B parameter LLM based on Qwen 2.5 architecture, optimized for reasoning and multilingual support with 128K context window
BRIEF-DETAILS: Image classification model that distinguishes between AI-generated, deepfake, and real images with 99.05% reported accuracy, built on the SiglipForImageClassification architecture.
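A sketch using the transformers image-classification pipeline; the model id and input image below are placeholders, not the actual repo name:

```python
from transformers import pipeline

# Image-classification pipeline; the model id is a placeholder for the
# SigLIP-based detector described above.
clf = pipeline("image-classification", model="your-org/siglip-ai-deepfake-real-detector")

preds = clf("photo.jpg")  # local file path or URL
for p in preds:
    print(f"{p['label']}: {p['score']:.3f}")
```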
Brief-details: DRAMA-base is a 0.1B parameter dense retrieval model derived from LLMs, supporting multilingual text retrieval with flexible embedding dimensions (768/512/256).
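An illustrative sketch (not DRAMA's specific API) of how flexible embedding dimensions are typically used: truncate a full-size vector to a smaller dimension and re-normalize before computing retrieval scores:

```python
import numpy as np

# Illustrative only: truncating a 768-d embedding to 256 dims and re-normalizing,
# as models with flexible embedding sizes allow.
def truncate_embedding(vec: np.ndarray, dim: int) -> np.ndarray:
    v = vec[:dim]
    return v / np.linalg.norm(v)

query = np.random.rand(768).astype(np.float32)  # stand-in for a real query embedding
doc = np.random.rand(768).astype(np.float32)    # stand-in for a real document embedding

q256, d256 = truncate_embedding(query, 256), truncate_embedding(doc, 256)
print("cosine similarity at 256 dims:", float(q256 @ d256))
```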
Brief-details: Bilingual French-English 7B parameter LLM built on Qwen2.5, trained on 2K curated samples for enhanced reasoning and outperforming its base model on math tasks.
BRIEF DETAILS: AlphaMaze-v0.2-1.5B is a specialized LLM trained for visual reasoning and maze-solving, built on DeepSeek-R1-Distill-Qwen-1.5B backbone with GRPO enhancement.