Brief Details: StableCoder 3B instruction-tuned model by StabilityAI for code generation. Research-only license, non-commercial use. Built for code completion and generation tasks.
Brief Details: ChilloutMix is a Stable Diffusion-based model available on HuggingFace, created by AnonPerson, specializing in photorealistic image generation.
Brief-details: A complex 24B parameter Mistral-based model merged using multiple base models and techniques, optimized for roleplaying with specialized sampling presets and format requirements.
BRIEF-DETAILS: A quantized version of FluentlyLM-Prinum offering multiple GGUF variants optimized for different size/quality tradeoffs, with sizes ranging from 12.4GB to 34.9GB.
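Choosing among GGUF variants is a size/quality tradeoff against available memory. A minimal sketch of that decision, where only the 12.4 GB and 34.9 GB endpoints come from the entry above; the variant names and intermediate sizes are illustrative assumptions, so check the actual repo's file list:

```python
# Hypothetical helper: pick the largest GGUF variant that fits a RAM budget.
# Only the smallest/largest sizes come from the model card; the rest are
# illustrative placeholders.
VARIANTS = [
    ("Q3_K_S", 12.4),
    ("Q4_K_M", 19.9),
    ("Q5_K_M", 23.0),
    ("Q6_K", 26.9),
    ("Q8_0", 34.9),
]

def pick_variant(ram_gb: float, headroom: float = 1.2):
    """Return the largest variant whose file size (scaled by a headroom
    factor for KV cache and runtime overhead) fits in ram_gb, else None."""
    fitting = [(name, size) for name, size in VARIANTS if size * headroom <= ram_gb]
    return max(fitting, key=lambda v: v[1])[0] if fitting else None

print(pick_variant(32))  # a mid-size quant fits a 32 GB machine
print(pick_variant(8))   # nothing fits: the smallest file needs ~15 GB with headroom
```

The 1.2x headroom factor is a rough rule of thumb, not a figure from the model card.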
Brief Details: Lightweight Russian-English sentence embedding model, distilled from FRIDA. Features 312-dim embeddings, 7 layers & multiple prefix modes for various NLP tasks.
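FRIDA-style encoders select behavior by prepending a task prefix to each input before encoding. A minimal sketch of that prefix handling; the mode names below are assumptions based on the FRIDA family, so check the model card for the exact supported set:

```python
# Sketch of prefix-mode handling for a FRIDA-distilled encoder.
# Prefix names are assumptions; verify against the model card.
PREFIXES = {
    "search_query": "search_query: ",
    "search_document": "search_document: ",
    "paraphrase": "paraphrase: ",
    "categorize": "categorize: ",
}

def with_prefix(texts, mode):
    """Prepend the task prefix the encoder expects for this mode."""
    if mode not in PREFIXES:
        raise ValueError(f"unknown prefix mode: {mode}")
    return [PREFIXES[mode] + t for t in texts]

# The prefixed strings are then passed to the encoder, e.g. with
# sentence-transformers: model.encode(with_prefix(queries, "search_query"))
print(with_prefix(["best NLP course"], "search_query"))
```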
BRIEF-DETAILS: EtherealAurora-12B is a merged ChatML model combining Aurora-SCE-12B, Ayla-Light-12B-Stock, and EtherealLight-12B using model stock merge methodology.
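The "model stock" merge named above interpolates between the base weights and the average of the fine-tuned weights, with a ratio derived from the angle between the fine-tuned deltas. A minimal numpy sketch of the two-checkpoint case on a single weight vector; real merges apply this per layer across full state dicts (e.g. via mergekit's model_stock method):

```python
import numpy as np

# Minimal sketch of the model-stock merge rule for two fine-tuned
# checkpoints. Real merges run this per layer over full state dicts.
def model_stock_merge(w_base, w_ft1, w_ft2):
    d1, d2 = w_ft1 - w_base, w_ft2 - w_base
    cos = float(d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2)))
    t = 2 * cos / (1 + cos)          # interpolation ratio toward the FT average
    w_avg = (w_ft1 + w_ft2) / 2
    return t * w_avg + (1 - t) * w_base

# Orthogonal deltas (cos = 0) give t = 0, so the merge falls back to the base.
base = np.zeros(4)
merged = model_stock_merge(base, np.array([1., 0, 0, 0]), np.array([0, 1., 0, 0]))
```

Intuition: the less the fine-tuned deltas agree (smaller cosine), the more the merge stays near the base weights.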
Brief Details: A powerful 32B parameter model achieving near-R1 performance through SuperDistillation, excelling in math (78.1% AIME), coding (61.6%), and science (65.0%).
BRIEF DETAILS: Fine-tuned Phi-4 model optimized for Turkish speech recognition; training on a 600-hour dataset cut WER from a 127.29 baseline to 47.57.
Brief Details: CodeT5-based model for generating natural language comments from Python code. Fine-tuned on 2.3K samples for code documentation tasks with 128 token limit.
Brief-details: A MultinomialNB-based ATS score predictor that evaluates resume-job description matches with 89.2% accuracy using TF-IDF vectorization and NLP techniques.
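The TF-IDF plus MultinomialNB approach described above can be sketched as a scikit-learn pipeline. The toy data and labels below are invented for illustration; the real model's 89.2% accuracy comes from its own training set, not this demo:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy sketch: TF-IDF features feeding a MultinomialNB match classifier.
# Texts and labels are invented placeholders.
pairs = [
    "python machine learning engineer resume python sklearn",   # good match
    "python data scientist job description pandas sklearn",     # good match
    "warehouse logistics forklift operator resume",             # poor match
    "retail cashier customer service job description",          # poor match
]
labels = [1, 1, 0, 0]  # 1 = resume matches the job description

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(pairs, labels)
print(clf.predict(["senior python engineer sklearn resume"]))
```

A production version would also need the NLP preprocessing the entry mentions (tokenization, stopword handling) and far more training pairs.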
Brief-details: Advanced 8B parameter GGUF implementation of YandexGPT, optimized by Vikhrmodels for enhanced performance and Russian language capabilities.
BRIEF DETAILS: 24B parameter Mistral-based merged model optimized for roleplay and chat. Features thinking capabilities via <think> tags and supports multiple chat formats including ChatML and Llama3.
BRIEF-DETAILS: TinyOctopus is a bilingual Audio-LLM combining Distil-Whisper and DeepSeek 1.5B for Arabic/English speech processing with 70.59% dialect accuracy.
Brief Details: Qwen2.5-0.5B-Portuguese model with 494M parameters, fine-tuned for Portuguese language tasks. Achieves strong performance on NLP benchmarks.
Brief Details: An 8B parameter multilingual LLM combining Llama3.2, TAIDE, and DeepSeek capabilities with strong performance in Chinese text analysis and vision tasks.
BRIEF-DETAILS: Optimized ONNX version of Microsoft's Phi-4 model, offering int4 quantization for CPU/GPU deployment with enhanced inference speed through ONNX Runtime.
Brief Details: Microsoft's Phi-4-mini-instruct optimized for ONNX Runtime, offering up to 12x speedup with int4 quantization. Supports 128K context.
Brief-details: DeepSeek-R1-GGML-FP8-Hybrid is a quantized version of DeepSeek using GGML format with FP8 precision, optimized for efficient inference while maintaining performance.
Brief Details: 8B parameter LLaMA-based instruction-tuned model in GGUF format, optimized for efficiency with Q8_0 quantization. Ready for llama.cpp deployment.
BRIEF-DETAILS: 12B parameter language model merged from Rei-12B and Francois-Huali-12B, optimized for roleplay and creative writing with GGUF quantization.
BRIEF DETAILS: 24B parameter language model focused on enriched language and RP/ERP capabilities. Merged from pre-trained models. Stable in both English and Russian.