Brief-details: Zireal-0-GGUF is a quantized version of Zireal-0 offering multiple compression variants from Q2_K to Q8_0, with sizes ranging from 244GB to 713GB, optimized for different performance/quality tradeoffs.
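A minimal sketch of loading one of these GGUF variants with llama-cpp-python; the repo id and filename pattern below are assumptions for illustration, not confirmed paths.

```python
# Sketch: pull a GGUF quant from the Hub and run it with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Zireal-0-GGUF",  # hypothetical repo id
    filename="*Q2_K*",                     # glob for the smallest (~244GB) variant
    n_ctx=4096,                            # context window
)
print(llm("Hello, world!", max_tokens=32)["choices"][0]["text"])
```

The same pattern applies to the other GGUF entries in this list; only the repo id and quant filename change.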
Brief-details: Fine-tuned Wav2Vec2 model for speech emotion recognition. Achieves 65% accuracy across 8 emotion classes. Trained on RAVDESS dataset with 1,440 samples.
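A minimal usage sketch via the transformers audio-classification pipeline; the model id is a placeholder assumption.

```python
# Sketch: speech emotion recognition with a fine-tuned Wav2Vec2 checkpoint.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="your-org/wav2vec2-ravdess-emotion",  # hypothetical model id
)
# Takes a path to an audio file; returns a score per emotion class.
print(classifier("sample.wav", top_k=8))
```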
Brief-details: A specialized deep learning model for colorectal pathology classification, achieving 91.4% accuracy across 13 distinct tissue classes with support for 1120×1120 patch analysis.
Brief Details: ResNet-18 model fine-tuned for 9-category sports classification with 92.4% accuracy. Handles cricket, archery, football, basketball, tennis, baseball, hockey, golf, and boxing images.
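A minimal sketch using the transformers image-classification pipeline; the model id is a placeholder assumption.

```python
# Sketch: classify a sports photo into one of the 9 categories.
from transformers import pipeline

clf = pipeline("image-classification", model="your-org/resnet18-sports-9")  # hypothetical id
for pred in clf("match_photo.jpg", top_k=3):  # top-3 of the 9 sport classes
    print(f"{pred['label']}: {pred['score']:.3f}")
```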
Brief-details: Word2Vec-based sentence similarity model using 300-dimensional embeddings and cosine similarity metrics. Trained for semantic text comparison with Gensim framework.
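A minimal sketch of the averaged-Word2Vec-plus-cosine approach described above, using Gensim's pretrained 300-d Google News vectors as a stand-in for the actual model's embeddings.

```python
# Sketch: sentence similarity from averaged 300-d Word2Vec vectors.
import numpy as np
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")  # 300-dimensional vectors

def sentence_vector(sentence):
    # Average the vectors of in-vocabulary tokens.
    tokens = [t for t in sentence.lower().split() if t in wv]
    return np.mean([wv[t] for t in tokens], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

v1 = sentence_vector("the cat sat on the mat")
v2 = sentence_vector("a kitten rests on the rug")
print(cosine(v1, v2))
```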
Brief-details: GGUF-quantized version of SD3.5 with optimized encoders and VAE, featuring improved loading times and fp32 precision for enhanced image generation quality.
Brief-details: Light-R1-32B-GGUF is a quantized version of the Light-R1-32B model, offering multiple compression variants from 12.4GB to 34.9GB with different quality-performance tradeoffs.
BRIEF-DETAILS: 8B parameter Llama-based model available in various GGUF quantizations (Q2-Q8) for efficient deployment, with Q4_K_M and Q4_K_S versions recommended for optimal performance balance.
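Rather than downloading the whole repo, a single recommended quant can be fetched directly; the repo id and filename below are assumptions for illustration.

```python
# Sketch: download just the recommended Q4_K_M file from the Hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="some-org/Llama-8B-GGUF",   # hypothetical repo id
    filename="llama-8b.Q4_K_M.gguf",    # hypothetical quant file
)
print(path)  # local cache path, ready to pass to llama.cpp
```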
BRIEF-DETAILS: A LoRA model trained using Replicate's Flux trainer, designed for use with the diffusers library and the Canopus-LoRA-Flux-UltraRealism-2.0 base model. Uses the TOK trigger word.
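A minimal diffusers sketch showing the LoRA-plus-trigger-word pattern; the LoRA repo id is a placeholder, and the Flux base checkpoint is an assumption standing in for the pairing named above.

```python
# Sketch: load a Flux-family base, attach the LoRA, prompt with the trigger word.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",     # assumed Flux base checkpoint
    torch_dtype=torch.bfloat16,
).to("cuda")
pipe.load_lora_weights("your-org/flux-lora")  # hypothetical LoRA repo id
image = pipe("TOK, ultra-realistic portrait photo").images[0]
image.save("out.png")
```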
Brief-details: Granite-3.2-8B-Instruct is an 8B parameter LLM optimized for reasoning tasks, supporting 12 languages and featuring controllable thinking capabilities, built by IBM's Granite team.
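A minimal chat sketch with transformers; the `thinking=True` toggle reflects the controllable-thinking feature described above and is an assumption about the chat template's interface.

```python
# Sketch: query Granite-3.2-8B-Instruct with reasoning enabled.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.2-8b-instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

msgs = [{"role": "user", "content": "How many primes are below 20?"}]
inputs = tok.apply_chat_template(
    msgs,
    thinking=True,                 # assumed template flag for controllable thinking
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```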
BRIEF DETAILS: 3.2B parameter Llama model instruction-tuned for the Jopara language, with multiple GGUF quantization options (Q2-Q8) for efficient deployment.
Brief Details: Fine-tuned Flan-T5-Base model for next-line prediction, optimized with FP16 quantization. Achieves perplexity of 23, trained on OpenWebText-10k dataset.
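A minimal generation sketch, loading in fp16 per the entry; the model id is a placeholder assumption.

```python
# Sketch: next-line prediction with a fine-tuned Flan-T5-Base in fp16.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "your-org/flan-t5-base-nextline"  # hypothetical repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")

prompt = "The quick brown fox jumps over the lazy dog."
ids = tok(prompt, return_tensors="pt").to("cuda")
out = model.generate(**ids, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```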
BRIEF-DETAILS: A 20B parameter merged LLM built with the passthrough method on mistral-nemo-kartoffel-12B model layers, optimized for enhanced performance through strategic layer combinations.
BRIEF-DETAILS: A 7B parameter GGUF-quantized model optimized for scientific literature processing, offering multiple quantization options from 3.1GB to 15.3GB for different performance needs.
BRIEF DETAILS: GGUF quantized version of Cygnus-II-14B with multiple compression variants (3.7GB-12.2GB). Features imatrix and static quantization options optimized for different size/performance tradeoffs.
Brief-details: A 9B parameter GGUF quantized language model offering multiple compression variants from 3.9GB to 18.6GB, with recommended Q4_K variants for optimal performance balance.
BRIEF-DETAILS: 8B parameter GGUF quantized model with multiple compression variants (Q2-Q8), optimized for efficient deployment with sizes ranging from 3.3GB to 16.2GB.
Brief Details: Fine-tuned 8B parameter medical reasoning model based on DeepSeek-R1, optimized with QLoRA and Unsloth for enhanced medical Chain-of-Thought capabilities.
Brief-details: Italian language model fine-tuned from Qwen2.5-Instruct, trained for 2 epochs on the WiroAI/dolphin-r1-Italian dataset, yielding improved reasoning capabilities.
Brief Details: AWQ-quantized version of InternVL2_5-4B, optimized for vision-language tasks with minimal performance loss (82.3% on MMBench). Supports multi-modal chat and video analysis.
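A minimal serving sketch with lmdeploy, which supports InternVL-family vision-language models; the AWQ repo id and image URL are assumptions for illustration.

```python
# Sketch: multimodal chat with an AWQ InternVL checkpoint via lmdeploy.
from lmdeploy import pipeline
from lmdeploy.vl import load_image

pipe = pipeline("OpenGVLab/InternVL2_5-4B-AWQ")  # assumed AWQ repo id
image = load_image("https://example.com/scene.jpg")
response = pipe(("Describe this image.", image))
print(response.text)
```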
BRIEF DETAILS: Babel-9B-Chat-i1-GGUF is a quantized version of the Babel-9B-Chat model, offering various compression options from 2.3GB to 7.5GB with different quality-performance tradeoffs.