Brief-details: BERT-based sentiment analysis model fine-tuned on the IMDB dataset, achieving 89.09% accuracy. Optimized for movie-review classification.
Brief-details: Piper-voices is a specialized collection of voice models for the Piper text-to-speech system, offering customizable voice synthesis capabilities for developers.
Brief-details: A specialized AI model by jomcs, released in February 2023 and hosted on HuggingFace; judging by its name, it may focus on continuous/dream-like content generation.
Brief-details: Mistral-Large-Instruct-2411 is Mistral AI's advanced instruction-tuned language model, focusing on high-quality responses and commercial applications.
Brief-details: Legacy 13B parameter Vicuna model optimized with GGML 4-bit quantization, based on the LLaMA architecture for efficient text generation and inference.
Brief-details: A 4-bit quantized version of Alpaca-native, optimized for CPU usage and requiring only 5GB of RAM, enabling efficient local deployment of LLaMA-based conversational AI.
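The ~5GB figure is consistent with a back-of-the-envelope estimate: weight storage at the effective bit width plus a flat allowance for activations and runtime buffers. A minimal sketch (the 4.5 effective bits/weight and 1GB overhead are illustrative assumptions, not published numbers):

```python
def quantized_model_ram_gb(n_params_billion: float,
                           bits_per_weight: float,
                           overhead_gb: float = 1.0) -> float:
    """Rough RAM estimate for a quantized model: weight storage at the
    given bit width, plus a flat allowance for activations and buffers."""
    weight_gb = n_params_billion * bits_per_weight / 8  # 1e9 params * bits/8 bytes, over 1e9 bytes/GB
    return weight_gb + overhead_gb

# A 7B LLaMA-class model at ~4.5 effective bits/weight (4-bit values
# plus per-group scales) comes out near the 5GB figure quoted above.
print(round(quantized_model_ram_gb(7, 4.5), 2))  # → 4.94
```

The "effective" bit width is slightly above the nominal 4 bits because quantization formats also store per-group scale factors alongside the integer weights.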
Brief-details: Chinese OCR model collection combining cnstd (text detection) and cnocr (text recognition) models, developed by breezedeus for Chinese document processing.
Brief-details: 32B parameter LLM quantized to 4-bit using GPTQ, optimized for efficient deployment with the GPTQModel framework. Features true sequential processing and dynamic group quantization.
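Group quantization, as mentioned above, shares one floating-point scale across a small block of weights. A simplified pure-Python sketch of the symmetric per-group variant (real GPTQ additionally uses error-compensating rounding driven by a Hessian estimate; this only illustrates the storage format):

```python
def quantize_group(weights, bits=4):
    """Symmetric quantization of one weight group: all weights in the
    group share a single scale; each weight is stored as an integer
    in [-8, 7] for 4 bits."""
    qmax = 2 ** (bits - 1) - 1                     # 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize_group(q, scale):
    return [v * scale for v in q]

group = [0.12, -0.87, 0.45, 0.03, -0.33, 0.91, -0.05, 0.27]
q, scale = quantize_group(group)
restored = dequantize_group(q, scale)
err = max(abs(a - b) for a, b in zip(group, restored))
print(q, round(scale, 4), round(err, 4))
```

The reconstruction error per weight is bounded by half the group's scale, which is why smaller groups (e.g. 32 or 128 weights) trade a little extra storage for noticeably better fidelity.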
Brief-details: A 12B parameter GGUF-quantized language model offering multiple compression variants from 4.9GB to 13.1GB, optimized for efficient deployment.
Brief-details: Enhanced 35B parameter LLM optimized for reasoning and creative tasks. Features the "Cubed" method with multiple conclusion layers; the ChatML format is required, but no system prompt is needed.
Brief-details: A comprehensive collection of GGUF quantizations of the Rombo-LLM-V3.1 32B model, offering various compression levels from 9GB to 35GB with different quality-size tradeoffs.
Brief-details: Diogenes-12B-GGUF: a 12B parameter model with multiple quantization options (Q2-Q8), optimized for efficiency with sizes ranging from 4.9GB to 13.1GB.
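As a rough sanity check on size listings like this, the effective bits per weight can be back-computed from file size and parameter count. A sketch (approximate: it ignores GB-vs-GiB conventions and the GGUF file's metadata overhead):

```python
def bits_per_weight(file_size_gb: float, n_params_billion: float) -> float:
    """Infer the effective bits stored per weight from a quantized
    model file's size; includes per-group scales and metadata, so it
    sits a little above the nominal quantization width."""
    return file_size_gb * 8 / n_params_billion

# The 4.9GB and 13.1GB variants of a 12B model quoted above:
print(round(bits_per_weight(4.9, 12), 2), round(bits_per_weight(13.1, 12), 2))
# → 3.27 8.73
```

Those endpoints line up with the stated Q2-Q8 span: roughly 3-bit effective storage at the small end and 8-bit-plus at the large end.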
Brief-details: A creative LoRA model trained under the Glif Loradex program, specializing in generating quirky, imaginative scenes with violet vector styling and a specific trigger word.
Brief-details: A fine-tuned version of Mistral-24B optimized for creative writing, showing significant improvements in narrative quality metrics and natural dialogue generation across multiple languages.
Brief-details: Sanskrit translation model based on the Qwen-7B architecture, offering multiple GGUF quantization options from 3.1GB to 15.3GB for efficient deployment.
Brief-details: ResNet-50 model fine-tuned for 102-class flower classification, achieving 92.8% accuracy. Optimized with FP16 quantization for efficient inference.
Brief-details: LADDER is a novel approach for LLMs to improve through recursive problem decomposition, enabling self-learning and enhanced problem-solving capabilities.
Brief-details: Japanese-language instruction-tuned 3B parameter model in GGUF format, built for efficient local deployment with llama.cpp.
Brief-details: Uncensored vision-language model based on IBM's Granite Vision 3.2 (2B params), modified using abliteration technique to remove refusal behaviors while maintaining vision capabilities.
Brief-details: Quantized version of BigKartoffel-mistral-nemo-20B offering multiple GGUF variants optimized for different size/performance tradeoffs, from 4.8GB to 16.9GB.