Brief Details: DiarizationLM-8b-Fisher-v2 is an 8B parameter LLM specialized for speaker diarization post-processing, built on the Llama-3 architecture and reporting low Word Diarization Error Rate (WDER).
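WDER (Word Diarization Error Rate) measures the fraction of words attributed to the wrong speaker. A simplified sketch, assuming the hypothesis is already word-aligned to the reference and the speaker labels are already matched (the full metric also optimizes the speaker-label mapping and handles alignment errors):

```python
def wder(ref_speakers, hyp_speakers):
    """Word Diarization Error Rate, simplified: fraction of aligned
    words whose hypothesis speaker label differs from the reference."""
    assert len(ref_speakers) == len(hyp_speakers), "sequences must be word-aligned"
    errors = sum(r != h for r, h in zip(ref_speakers, hyp_speakers))
    return errors / len(ref_speakers)

# One of four words is attributed to the wrong speaker:
print(wder(["A", "A", "B", "B"], ["A", "B", "B", "B"]))  # -> 0.25
```

A diarization post-processor like this model aims to push that fraction down by correcting speaker labels in the transcript.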
Brief Details: A 70B parameter uncensored LLaMA 3.1 variant created through LoRA-based abliteration (ablating the model's refusal behavior), producing fewer refusals in text generation while aiming to preserve output quality.
Brief Details: A specialized ControlNet Canny model trained on FLUX.1-dev, enabling precise edge-guided image generation with high-quality artistic outputs and cinematic effects.
Brief Details: Text-to-image model optimized for fast inference (6-8 steps) with FP8 precision, built from a merge of FLUX.1-dev and FLUX.1-schnell weights. Non-commercial license.
Brief Details: InternLM2.5-1.8B is an advanced language model with significantly improved reasoning capabilities, built on the InternLM2 architecture and trained with synthetic data.
Brief Details: Korean-optimized 8B parameter LLM based on Meta's Llama 3.1, fine-tuned on Korean datasets for enhanced conversational AI capabilities.
Brief Details: A 4-bit quantized version of Llama 3.1 (8B parameters) fine-tuned with Unsloth and the TRL library, optimized for efficient text generation under the Apache-2.0 license. Currently in the training phase.
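As a rough picture of what 4-bit quantization does to weights, here is a minimal symmetric absmax sketch in plain Python. It is illustrative only: real 4-bit schemes such as the NF4 format used by bitsandbytes use non-uniform quantization levels and per-block scales rather than one per-tensor scale.

```python
def quantize_4bit(weights):
    """Symmetric absmax 4-bit quantization sketch: map floats to
    integers in [-7, 7] using a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 7 or 1.0  # avoid zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from the 4-bit integers."""
    return [v * scale for v in q]

w = [0.9, -0.35, 0.02, -0.7]
q, scale = quantize_4bit(w)  # q = [7, -3, 0, -5]
```

The round trip through `dequantize_4bit` shows the quantization error that fine-tuning frameworks like Unsloth work around by keeping LoRA adapters in higher precision.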
Brief Details: A 12B parameter roleplay-focused LLM built on Mistral-Nemo, featuring multiple AI personas and optimized for character interactions using ChatML format.
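ChatML wraps each conversation turn in `<|im_start|>` / `<|im_end|>` delimiters. A small helper showing the format (role names and the trailing assistant cue follow the common ChatML convention; the persona text is a placeholder):

```python
def chatml(messages):
    """Render a list of {role, content} turns in ChatML."""
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    out += "<|im_start|>assistant\n"  # cue the model to produce the next turn
    return out

prompt = chatml([
    {"role": "system", "content": "You are Mira, a witty ship's navigator."},
    {"role": "user", "content": "Where are we headed?"},
])
```

Character cards and persona instructions for roleplay models of this kind typically go in the system turn.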
Brief Details: A specialized inpainting model based on Kolors-Basemodel, offering superior image completion capabilities with support for both Chinese and English prompts. Notable for its low artifact scores and high visual appeal.
Brief Details: RoBERTa-based language model for Kazakh (355M params) trained on a multidomain dataset. Specializes in masked language modeling, with 4,000+ downloads.
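Masked language modeling trains the model to recover tokens hidden at random positions. A simplified sketch of the masking step (the real BERT/RoBERTa recipe also keeps or randomizes a fraction of the selected tokens, the 80/10/10 rule, and works on subword IDs rather than whole words):

```python
import random

def mask_tokens(tokens, mask_token="<mask>", rate=0.15, seed=0):
    """Hide roughly `rate` of the tokens; return the masked sequence
    and a dict mapping masked positions to their original tokens."""
    rng = random.Random(seed)
    masked, labels = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < rate:
            masked.append(mask_token)
            labels[i] = tok  # training target at this position
        else:
            masked.append(tok)
    return masked, labels
```

The model is then trained to predict each `labels[i]` from the surrounding unmasked context.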
Brief Details: A Stable Diffusion LoRA model specialized in generating Pixar-style 3D images, trained on SDXL base 1.0 with the AdamW optimizer, a constant learning-rate scheduler, and a network dimension of 64.
Brief Details: 8B parameter biomedical LLM based on Llama3, specialized through instruction pre-training. Outperforms larger models in medical tasks.
Brief Details: A 109M parameter retrieval model for zero-shot commonsense QA, built on e5-base-v2. Specializes in example-based retrieval augmentation for LLMs.
Brief Details: CogVLM2-Video-Llama3-Chat: A 12.5B parameter video understanding model achieving SOTA performance in video QA tasks, supporting minute-long video analysis.
Brief Details: Phi-3.1-mini-4k-instruct-GGUF is a 3.8B parameter dense decoder-only Transformer model with 4K context, optimized for reasoning and instruction following.
Brief Details: Trendyol LLM v2.0: an 8B parameter Turkish language model based on Llama-3, fine-tuned on 13B tokens for chat interactions. Features BF16 precision and a focus on safe deployment.
Brief Details: A LoRA model for SDXL focused on photorealistic image generation, featuring advanced noise processing and optimized for portraits and human subjects with high-quality detail.
Brief Details: Quantized MobileNet-based pose estimation model optimized for mobile devices, achieving fast inference (0.3-13ms) with INT8 precision on Qualcomm NPUs.
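The INT8 precision mentioned above generally means an affine mapping from floats to 8-bit integers via a scale and zero point. A minimal per-tensor sketch of that mapping (illustrative; not the exact scheme used on Qualcomm NPUs, which typically quantize per-layer or per-channel):

```python
def quantize_int8(x):
    """Asymmetric per-tensor INT8 quantization sketch: map floats in
    [min(x), max(x)] onto the unsigned range [0, 255]."""
    lo, hi = min(x), max(x)
    scale = (hi - lo) / 255 or 1.0          # avoid zero scale
    zero_point = round(-lo / scale)          # integer that represents 0.0
    q = [min(255, max(0, round(v / scale) + zero_point)) for v in x]
    return q, scale, zero_point

q, scale, zp = quantize_int8([-1.0, 0.0, 3.0])  # q = [0, 64, 255], zp = 64
```

Running the whole network in this integer domain is what lets NPUs hit sub-millisecond per-inference latencies.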
Brief Details: A fast and efficient image-text model achieving 67.8% ImageNet accuracy, 4.8x faster than ViT-B/16 and 2.8x smaller; ideal for mobile applications.
Brief Details: 70B parameter reward model trained on HelpSteer2 dataset, evaluating responses on 5 key attributes: helpfulness, correctness, coherence, complexity, and verbosity.
Brief Details: 8B parameter multilingual LLM focused on Spanish/English, built on Llama-3, trained on 24 datasets with capabilities in RAG, function calling, and code assistance.
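Function calling generally means the model emits a structured (usually JSON) call that the host application parses and dispatches. A minimal sketch of that pattern; the `get_weather` tool, its schema, and the registry are hypothetical placeholders, not part of this model's API:

```python
import json

# Hypothetical tool schema, in the style commonly shown to the model in its prompt.
TOOLS = [{
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def dispatch(call_json, registry):
    """Parse a model-emitted tool call and invoke the matching function."""
    call = json.loads(call_json)
    return registry[call["name"]](**call["arguments"])

# A model-emitted call (here hard-coded) routed to a real function:
result = dispatch('{"name": "get_weather", "arguments": {"city": "Madrid"}}',
                  {"get_weather": lambda city: f"sunny in {city}"})
```

In a RAG or assistant loop, the function's return value would be fed back to the model as a tool-result turn before it produces its final answer.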