Brief-details: OpenAI's CLIP Vision Transformer (ViT-L/14) for zero-shot image classification, trained at 336x336 input resolution. Research-focused.
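A minimal zero-shot classification sketch with the transformers library, assuming the checkpoint is openai/clip-vit-large-patch14-336 (the 336px ViT-L/14 CLIP release this entry appears to describe):

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint id for OpenAI's 336px ViT-L/14 CLIP release.
model_id = "openai/clip-vit-large-patch14-336"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("cat.jpg")
labels = ["a photo of a cat", "a photo of a dog"]

# The processor resizes the image to 336x336 and tokenizes the candidate labels.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Image-text similarity logits, softmaxed over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```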
Brief-details: A weighted/imatrix quantized version of DeepSeek-R1-Distill-Qwen-32B offering multiple GGUF variants optimized for different size/performance tradeoffs, ranging from 7.4GB to 27GB.
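A loading sketch with llama-cpp-python; the GGUF filename below is a placeholder, since the actual variant names (e.g. Q4_K_M vs IQ3_XXS) depend on which size/quality tradeoff you download:

```python
from llama_cpp import Llama

# Hypothetical filename: substitute the GGUF variant you downloaded
# (the ~7.4GB quants trade quality for footprint; the ~27GB ones keep more).
llm = Llama(
    model_path="DeepSeek-R1-Distill-Qwen-32B.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm("Explain distillation in one paragraph.", max_tokens=256)
print(out["choices"][0]["text"])
```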
Brief-details: Qwen2.5-7B optimized with Unsloth's 4-bit quantization, offering 2x faster training, 60% less memory usage, and full 32K context support.
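A loading sketch with Unsloth's API; the repo id is an assumption (Unsloth publishes pre-quantized 4-bit checkpoints under its org, but verify the exact name):

```python
from unsloth import FastLanguageModel

# Assumed repo id for the pre-quantized 4-bit checkpoint.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-bnb-4bit",
    max_seq_length=32768,  # the entry advertises full 32K context
    load_in_4bit=True,     # 4-bit weights drive the cited memory savings
)
```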
Brief-details: 7th_test is a Stable Diffusion model optimized for DPM++ 2M Karras sampling with CFG 7±5 and 25 steps, featuring specialized negative prompting.
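A sketch of those sampler settings in diffusers; the model path is a placeholder since the entry gives no repo id, and DPM++ 2M Karras corresponds to the multistep DPM-Solver scheduler with Karras sigmas:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Placeholder path: the 7th_test repo id is not documented in this entry.
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/7th_test", torch_dtype=torch.float16
).to("cuda")

# DPM++ 2M Karras = multistep DPM-Solver with the Karras sigma schedule.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "portrait, detailed",
    negative_prompt="lowres, bad anatomy",  # illustrative; use the card's negatives
    guidance_scale=7.0,        # CFG 7 (the card allows a ±5 band)
    num_inference_steps=25,
).images[0]
image.save("out.png")
```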
Brief-details: Emotion detection model based on RoBERTa, trained on 58k Reddit comments to classify 28 emotions. Achieves a 49.3% F1 score; well suited to sentiment analysis.
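A usage sketch with the transformers pipeline; the model id is a placeholder for whichever GoEmotions-style RoBERTa checkpoint this entry refers to:

```python
from transformers import pipeline

# Placeholder id: substitute the actual RoBERTa emotion checkpoint.
clf = pipeline("text-classification", model="your-org/roberta-emotions", top_k=None)

# top_k=None returns scores for all 28 emotion labels, not just the argmax.
for item in clf(["Thanks so much, this made my day!"])[0]:
    print(item["label"], round(item["score"], 3))
```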
Brief-details: NSFW-3B is a 3 billion parameter language model by UnfilteredAI, specialized in NSFW content generation with related models for specific content types.
Brief-details: AniPortrait is an innovative AI framework for creating photorealistic portrait animations driven by audio input or reference videos, developed by Tencent Games Zhiji team.
Brief-details: High-performing 14B parameter LLM with DPO training, achieving 7.62 on MT-Bench. Ranks #1 among non-base models of its size on the HuggingFace Open LLM Leaderboard.
Brief-details: StarPII - A specialized NER model by bigcode for detecting Personally Identifiable Information (PII) in code datasets, focused on data privacy and security.
Brief-details: Quantized BERT model fine-tuned for movie recommendations, optimized with FP16 quantization. Achieves 0.84 NDCG score on genre-based recommendations.
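For context on the 0.84 figure, a toy NDCG computation with scikit-learn (an illustration of the metric, not the repo's actual evaluation script):

```python
import numpy as np
from sklearn.metrics import ndcg_score

# Toy example: graded relevance of five candidate movies (ground truth)
# versus the scores the recommender assigned them.
true_relevance = np.asarray([[3, 2, 0, 1, 0]])
model_scores = np.asarray([[0.9, 0.7, 0.5, 0.6, 0.1]])

# NDCG rewards ranking highly relevant items near the top; 1.0 is a perfect
# ordering, so 0.84 indicates a mostly correct ranking.
print(round(ndcg_score(true_relevance, model_scores), 3))
```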
Brief-details: A LoRA fine-tune of Mistral-7B-Instruct specialized for agent tool calling.
Brief-details: A spaCy-based job recommendation system using NLP and graph analysis to match resumes with job postings, achieving 85.6% accuracy in relevant matches.
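A minimal sketch of the spaCy similarity step, assuming a vectors-equipped model such as en_core_web_md; the described system reportedly layers graph analysis on top of this:

```python
import spacy

# Requires a model with word vectors (md/lg); the small models lack them.
nlp = spacy.load("en_core_web_md")

resume = nlp("Python developer with five years of NLP and data pipeline experience.")
job = nlp("Seeking an NLP engineer to build text-processing pipelines in Python.")

# Cosine similarity of averaged word vectors; higher means a closer match.
print(round(resume.similarity(job), 3))
```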
Brief-details: A 9B parameter quantized Gemma model optimized for coding tasks, offering multiple GGUF variants from 3.9GB to 18.6GB with different quality-size tradeoffs.
Brief-details: A merged 8B parameter Llama 3.1 model combining Dolermed and Smarteaz variants, showing strong performance in instruction-following (IFEval: 79.78).
Brief-details: An 8B parameter Llama 3.1-based merged model combining medical and general knowledge streams, achieving a 27.38 average benchmark score with strong IFEval performance.
Brief-details: 8B parameter Llama 3.1-based merged model optimized for smart reasoning tasks, featuring a high IFEval score (81.51) and solid performance across various benchmarks.
Brief-details: 32B parameter Qwen2.5-based LLM optimized for character/scenario portrayal with strong prose generation and dialogue capabilities, featuring unique thinking patterns.
Brief-details: Ursa_Minor - An AI model by Sculptor-AI built with the PEFT 0.14.0 framework; public documentation is currently limited.
Brief-details: Latxa-Llama-3.1-8B-Instruct is a specialized Basque language model based on Llama-3.1, trained on 4.2B tokens for enhanced performance in low-resource language tasks.
Brief-details: Qwen2.5-0.5B fine-tuned for Python code generation, leveraging Unsloth optimization for 2x faster training.
Brief-details: A Korean speech-to-text model fine-tuned from Phi-4-multimodal-instruct, specialized in ASR and translation tasks with significant improvements on zeroth-test benchmarks.