Brief-details: Amateur Photography LoRA model for Stable Diffusion, specializing in realistic amateur-photography effects; the card recommends applying its trigger at a 0.8 weight.
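A minimal sketch of applying such a LoRA at the recommended 0.8 weight with diffusers; the base checkpoint and LoRA repo id below are placeholders, not the actual model names.

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder ids; substitute the actual base checkpoint and LoRA repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("your-user/amateur-photography-lora")

image = pipe(
    "candid amateur photo of a street market, flash photography",
    cross_attention_kwargs={"scale": 0.8},  # the 0.8 weight from the card
).images[0]
image.save("out.png")
```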
Brief-details: Afrikaans-to-English neural machine translation model by Helsinki-NLP, achieving a 60.8 BLEU score on the Tatoeba test set, based on the transformer architecture.
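The Helsinki-NLP OPUS-MT models follow a standard MarianMT usage pattern; a minimal sketch, assuming the repo id Helsinki-NLP/opus-mt-af-en:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-af-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# "I like machine translation." in Afrikaans
batch = tokenizer(["Ek hou van masjienvertaling."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```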
Brief-details: Canvers-real v3.9.1 is an AI model by circulus hosted on HuggingFace; based on related models in the series, it may offer language-translation capabilities.
Brief-details: A specialized Korean-to-English translation model developed by circulus, available on HuggingFace, with companion models for bidirectional translation.
Brief-details: English-to-Korean translation model by circulus, part of the Canvers series and the counterpart to the Korean-to-English model above; a usage sketch for the pair follows.
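A minimal sketch of driving both directions through the transformers translation pipeline; the repo ids below are guesses at the naming scheme, so check the circulus profile for the exact names.

```python
from transformers import pipeline

# Hypothetical repo ids; verify the exact names on the circulus HF profile.
ko2en = pipeline("translation", model="circulus/canvers-ko2en")
en2ko = pipeline("translation", model="circulus/canvers-en2ko")

print(ko2en("만나서 반갑습니다.")[0]["translation_text"])
print(en2ko("Nice to meet you.")[0]["translation_text"])
```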
Brief-details: A specialized image-generation model focused on artistic shower/bathing scenes, using specific trigger words for soap, water, and bathing elements.
Brief Details: An encoder component extracted from Whisper-large-v3-turbo, specialized for audio processing and speech recognition tasks within the Whisper architecture.
Brief-details: A lightweight cross-encoder model trained on MS MARCO passage ranking; it processes ~9,000 docs/sec with an NDCG@10 of 67.43 on TREC DL19 and an MRR@10 of 30.15 on the MS MARCO dev set.
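The reported figures match the cross-encoder/ms-marco-TinyBERT-L-2 card; assuming that id, reranking with sentence-transformers looks like this:

```python
from sentence_transformers import CrossEncoder

# Repo id assumed from the reported NDCG/MRR figures; verify against the card.
model = CrossEncoder("cross-encoder/ms-marco-TinyBERT-L-2")
scores = model.predict([
    ("how many people live in berlin", "Berlin has a population of about 3.7 million."),
    ("how many people live in berlin", "Berlin is known for its museums and nightlife."),
])
print(scores)  # higher score = more relevant passage
```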
Brief-details: Uncensored variant of DeepSeek-R1-Distill-Qwen-14B produced with the abliteration technique; the 14B-parameter model aims to generate responses without refusal patterns.
Brief-details: A merged 7B-parameter LLM combining Qwen2.5 variants via the TIES method, averaging 37.30% on the Open LLM Leaderboard benchmarks with strong IFEval performance (76.40%).
Brief-details: Virtual try-on AI model that enables clothing transfer between any two images, developed by loooooong and hosted on HuggingFace for fashion applications.
Brief-details: BERT-based sentiment analyzer for movie reviews, fine-tuned on the IMDb dataset. Achieves 92.5% accuracy with float16 quantization. Handles binary positive/negative classification.
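A minimal sketch of running such a checkpoint through the sentiment-analysis pipeline in half precision; the repo id is a placeholder:

```python
import torch
from transformers import pipeline

clf = pipeline(
    "sentiment-analysis",
    model="your-user/bert-imdb-sentiment",  # placeholder repo id
    torch_dtype=torch.float16,              # matches the card's float16 setting
)
print(clf("A beautifully shot film let down by a hollow script."))
```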
Brief-details: 8B-parameter YandexGPT model converted to GGUF format, compatible with llama.cpp and quantized to 4-bit for efficient local deployment.
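GGUF checkpoints like this run locally through llama.cpp or its Python bindings; a sketch with llama-cpp-python, with the file name assumed:

```python
from llama_cpp import Llama

# File name is a placeholder; use the actual 4-bit .gguf from the repo.
llm = Llama(model_path="yandexgpt-8b-q4_k_m.gguf", n_ctx=4096)
out = llm("Q: What is the capital of Russia?\nA:", max_tokens=32)
print(out["choices"][0]["text"])
```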
Brief-details: Vision-language model fine-tuned for fire detection with 99.41% accuracy; classifies images as fire, smoke, or normal conditions using the SiglipForImageClassification architecture.
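SiglipForImageClassification checkpoints plug straight into the image-classification pipeline; a sketch with a placeholder repo id:

```python
from transformers import pipeline

clf = pipeline("image-classification", model="your-user/fire-detection-siglip")  # placeholder
preds = clf("scene.jpg")  # local path or URL
print(preds)  # e.g. [{'label': 'fire', 'score': ...}, {'label': 'smoke', ...}, ...]
```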
Brief-details: Korean language model (7B params) optimized for reasoning tasks. Strong math performance (61.82%) but struggles with chemistry. Version 2.0.3 shows improvements over v1.
Brief-details: GGUF-quantized version of watt-tool-8B with multiple compression options (Q2_K through Q8_0) trading file size for quality; files range from 3.3 GB to 16.2 GB for flexible deployment.
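Picking one quantization level usually means downloading a single .gguf file rather than the whole repo; a sketch with huggingface_hub, with repo and file names assumed:

```python
from huggingface_hub import hf_hub_download

# Placeholder repo/file names; list the repo's files to see the Q2_K..Q8_0 options.
path = hf_hub_download(
    repo_id="your-user/watt-tool-8B-GGUF",
    filename="watt-tool-8B.Q4_K_M.gguf",
)
print(path)  # local cache path, ready for llama.cpp
```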
Brief-details: P2L (Prompt-to-Leaderboard) model that creates prompt-specific leaderboards for LLM evaluation, using Grounded Rao-Kupper regression for personalized model performance ranking.
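For reference, the standard Rao-Kupper paired-comparison model with ties, which P2L extends by making the strength parameters prompt-dependent (the "grounded" details are per the P2L paper and not reproduced here):

```latex
% Standard Rao-Kupper model: \pi_i is model i's strength, \theta \ge 1 governs ties.
P(i \succ j) = \frac{\pi_i}{\pi_i + \theta\,\pi_j}, \qquad
P(i \equiv j) = \frac{(\theta^2 - 1)\,\pi_i\,\pi_j}{(\pi_i + \theta\,\pi_j)\,(\pi_j + \theta\,\pi_i)}
```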
Brief-details: MiniMath-R1-1.5B is a 1.5B-parameter math-focused LLM fine-tuned from DeepSeek-R1-Distill-Qwen-1.5B, achieving 44.4% accuracy on MMLU-Pro-Math.
Brief-details: An advanced vision-language model fine-tuned from SigLIP-2 that assesses deepfake image quality, categorizing images as either high-quality or flawed.
Brief-details: A specialized 7B-parameter model trained for text repair with custom GRPO reward functions and UTF-8 corruption handling; designed for synthesizing preference data.
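The card does not publish the reward functions; purely as an illustration of the shape a GRPO text-repair reward could take (all names here are hypothetical):

```python
import difflib

def repair_reward(repaired: str, reference: str) -> float:
    """Hypothetical GRPO-style reward: similarity to the clean reference,
    minus a penalty if the output still contains UTF-8 replacement chars."""
    sim = difflib.SequenceMatcher(None, repaired, reference).ratio()
    utf8_penalty = 0.5 if "\ufffd" in repaired else 0.0
    return sim - utf8_penalty

print(repair_reward("clean text", "clean text"))       # 1.0
print(repair_reward("cle\ufffdn text", "clean text"))  # similarity minus penalty
```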
Brief-details: A 12B-parameter instruction-tuned PLLuM model converted to GGUF with Q5_K_M quantization for llama.cpp, suitable for local deployment and inference.