Brief-details: VGG16-based image classification model fine-tuned via transfer learning for binary cat-vs-dog classification, aiming for high accuracy and easy integration.
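A minimal sketch of the transfer-learning setup the brief describes, using torchvision's ImageNet-pretrained VGG16; the frozen-backbone recipe and hyperparameters are assumptions, not this model's published training code:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained VGG16 backbone (assumed starting point).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor so only the head is trained.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final classifier layer with a 2-class head (cat vs. dog).
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

# Only the unfrozen parameters are passed to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()
```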
Brief-details: A RoBERTa-based model fine-tuned for stance detection on climate-change discussions from Twitter, developed by the CardiffNLP team.
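A hedged usage sketch with the transformers pipeline; the repository ID shown is an assumption about which CardiffNLP TweetEval stance checkpoint the brief refers to:

```python
from transformers import pipeline

# Model ID is assumed from CardiffNLP's TweetEval stance checkpoints.
classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-stance-climate",
)

print(classifier("Climate change is a serious threat and we must act now."))
# Expected output: a stance label (e.g. favor/against/none) with a confidence score.
```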
Brief-details: Spanish Longformer model trained on National Library of Spain data; supports 4096-token sequences and is optimized for masked language modeling (MLM) and downstream NLP applications.
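A quick fill-mask sketch; the repository ID is an assumption about which BNE Longformer checkpoint is meant, and a short sentence is used only for illustration (the 4096-token window is the point of the model):

```python
from transformers import pipeline

# Repository ID is an assumption; substitute the actual Spanish Longformer checkpoint.
fill_mask = pipeline("fill-mask", model="PlanTL-GOB-ES/longformer-base-4096-bne-es")

# RoBERTa-style models use <mask> as the mask token.
print(fill_mask("Madrid es la <mask> de España."))
```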
Brief-details: Comprehensive collection of GGUF quantizations for Llama-3.3-70B-Instruct, with file sizes ranging from 16GB to 141GB at different quality-size tradeoffs.
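A minimal sketch of loading one of the GGUF files with llama-cpp-python; the filename and quant level below are illustrative, not a specific file from this collection:

```python
from llama_cpp import Llama

# Filename and quant level are placeholders; pick the GGUF file that fits your RAM/VRAM.
llm = Llama(
    model_path="Llama-3.3-70B-Instruct-Q4_K_M.gguf",
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if they fit, otherwise lower this
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF quantization in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```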
Brief-details: MV-Adapter generates multi-view images from text or image inputs; it is compatible with various base text-to-image (T2I) models and supports 768x768 outputs.
Brief-details: A Russian scientific language model developed by MSU Lab, likely optimized for processing and analyzing Russian-language scientific text.
Brief-details: DeepSeek-V3 is a 671B parameter MoE model with 37B activated parameters, featuring FP8 training and 128K context length, achieving SOTA performance in reasoning and specialized tasks.
Brief-details: A 0.5B-parameter code-specialized LLM based on Qwen2.5, optimized for 4-bit quantization, with a 32k context window and improved code generation.
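A sketch of 4-bit loading with the transformers + bitsandbytes stack; the base repo ID is an assumption (the brief likely refers to a pre-quantized variant of it), and NF4 settings are generic defaults rather than this model's card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Base repo is assumed; swap in the actual pre-quantized checkpoint if you have it.
model_id = "Qwen/Qwen2.5-Coder-0.5B-Instruct"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```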
Brief-details: Gemma-7b is Google's 7B-parameter language model; access requires Hugging Face authentication and acceptance of the license. Built for general-purpose NLP tasks.
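A short sketch of the gated-access flow: accept the license on the model page first, then authenticate before downloading (device_map="auto" assumes accelerate is installed):

```python
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

# Prompts for a token; alternatively set the HF_TOKEN environment variable.
login()

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
```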
Brief-details: A comprehensive collection of GGUF quantizations for the 32B-parameter FuseO1-DeepSeekR1 model, with file sizes ranging from 9GB to 65GB at different quality-size tradeoffs.
Brief-details: ONNX-optimized CPU variant of the BGE-M3 embedding model, exported with O2 optimizations for efficient text embedding on CPU hardware.
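A hedged sketch using optimum's ONNX Runtime wrapper; the repo ID is a placeholder, and it assumes the export is optimum-compatible. The CLS-pooling plus L2-normalization step follows the usual BGE convention:

```python
import torch
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForFeatureExtraction

# Placeholder repo ID; substitute the actual ONNX O2 export of BAAI/bge-m3.
repo = "your-namespace/bge-m3-onnx-o2"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = ORTModelForFeatureExtraction.from_pretrained(repo)  # runs on CPU via ONNX Runtime

inputs = tokenizer(["hello world"], padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs)

# Dense embeddings: take the [CLS] token and L2-normalize (standard BGE usage).
emb = torch.nn.functional.normalize(outputs.last_hidden_state[:, 0], dim=-1)
print(emb.shape)
```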
Brief-details: BGE reranker model (M3 architecture) optimized for CPU inference via ONNX, with O3 optimizations for efficient text reranking.
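Unlike the embedding model above, a reranker scores query-passage pairs directly. A hedged sketch with optimum's sequence-classification wrapper; the repo ID is a placeholder for the ONNX export of a BGE reranker:

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification

# Placeholder repo ID; substitute the actual ONNX O3 reranker export.
repo = "your-namespace/bge-reranker-v2-m3-onnx-o3"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = ORTModelForSequenceClassification.from_pretrained(repo)

pairs = [["what is a panda?", "The giant panda is a bear species endemic to China."]]
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors="pt")

# One relevance logit per query-passage pair; higher means more relevant.
scores = model(**inputs).logits.squeeze(-1)
print(scores)
```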
Brief-details: Text-to-speech model collection for KoboldCpp; requires both an OuteTTS model and a WavTokenizer model for full voice-synthesis functionality.
Brief-details: Distilled 14B-parameter Qwen model provided in GGUF format with multiple quantization options ranging from 3.7GB to 12.2GB, featuring imatrix quantization improvements.
Brief-details: Mistral Small 24B Instruct is a 24B-parameter LLM with a 32k context window, multilingual capabilities, and an Apache 2.0 license. Runs on a single RTX 4090 or in 32GB of RAM when quantized.
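A back-of-the-envelope check on the "RTX 4090 / 32GB RAM" claim; the bit-widths below are generic assumptions, not figures from the model card:

```python
# Rough weight-memory estimate for a 24B-parameter model at common quantization widths.
params = 24e9

for name, bits in [("FP16", 16), ("8-bit", 8), ("4-bit", 4)]:
    weight_gb = params * bits / 8 / 1e9
    print(f"{name}: ~{weight_gb:.0f} GB of weights")

# 4-bit -> ~12 GB of weights, which is why a quantized build fits on a 24 GB RTX 4090
# or in 32 GB of system RAM, with headroom left for the KV cache and runtime buffers.
```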
Brief-details: DeBERTa-v3-large fine-tuned for textual entailment, specializing in 2-way classification (entailment/contradiction) on the MNLI dataset.
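A premise/hypothesis scoring sketch; the repo ID is a placeholder for the fine-tuned checkpoint, and the example sentences are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo ID; substitute the actual DeBERTa-v3-large entailment checkpoint.
repo = "your-namespace/deberta-v3-large-entailment"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

# Map probabilities to the model's two labels (entailment/contradiction per the brief).
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs[0])})
```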
Brief-details: A text-to-image generation model specialized in stylized artwork, featuring customizable parameters for image dimensions, sampling steps, and various enhancement options.
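A sketch of how those parameters map onto a diffusers call; the checkpoint ID is a placeholder and this assumes a Stable Diffusion-style model:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint ID; assumes a Stable Diffusion-compatible checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "your-namespace/stylized-artwork-model", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a stylized portrait of a fox in watercolor",
    width=768,                # image dimensions
    height=768,
    num_inference_steps=30,   # sampling steps
    guidance_scale=7.5,       # prompt adherence vs. creativity
).images[0]
image.save("fox.png")
```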
Brief-details: A classifier designed for NPS (Net Promoter Score) tag classification, helping businesses categorize customer feedback and sentiment.
Brief-details: TinyLlama chat model optimized with 4-bit quantization, offering 3.9x faster performance and 74% less memory usage via Unsloth's optimization framework.
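A hedged loading sketch with Unsloth's FastLanguageModel; the repo name follows Unsloth's usual bnb-4bit naming pattern but is an assumption:

```python
from unsloth import FastLanguageModel

# Repo ID is assumed from Unsloth's 4-bit checkpoint naming convention.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-chat-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,   # pre-quantized weights keep memory usage low
)

# Switch to Unsloth's faster inference path.
FastLanguageModel.for_inference(model)

messages = [{"role": "user", "content": "Tell me a joke."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")
out = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```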
Brief-details: A photorealistic image-generation model by claudfuen, available on Hugging Face and focused on producing highly realistic, authentic-looking images.
Brief-details: An 8B-parameter LLaMA-based model that applies weight orthogonalization techniques, optimized for research applications, with EXL2 quantization support.
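"Orthogonalization" here most likely refers to directional ablation (projecting a learned behavioral direction out of the weight matrices); the following is a minimal illustrative sketch of that projection, not the model's actual recipe:

```python
import numpy as np

def orthogonalize_weights(W: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of W's output space that lies along `direction`.

    W is a (d_out, d_in) weight matrix; `direction` is a vector in the d_out
    output space (e.g. an estimated behavioral/refusal direction).
    """
    d = direction / np.linalg.norm(direction)
    # Subtract the rank-1 projection so W can no longer write along d.
    return W - np.outer(d, d @ W)

# Toy example with random data standing in for real model weights.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
d = rng.normal(size=8)
W_abl = orthogonalize_weights(W, d)
print(np.allclose((d / np.linalg.norm(d)) @ W_abl, 0.0))  # True: output along d is zeroed
```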