Brief-details: DeepSeek-R1-Medical-COT specialized GGUF model for medical reasoning and chain-of-thought prompting, optimized for healthcare applications.
Brief-details: Japanese instruction-tuned 13B-parameter LLM converted to GGUF format for efficient local deployment with llama.cpp, optimized for Japanese text generation.
Brief-details: 1B-parameter bilingual (EN/JP) model using a hybrid Samba architecture. Pre-trained on 4T tokens, it combines Mamba SSM with sliding window attention. Apache 2.0 licensed.
Brief-details: Quantized Pythia-410M model fine-tuned for Q&A tasks. Uses float16 precision, achieves 0.56 accuracy, and was trained on the ambig_qa dataset with a focus on chatbot applications.
Brief-details: Lumina Image 2.0 repackaged for ComfyUI - an optimized image generation model with enhanced compatibility and workflow integration for the ComfyUI framework.
Brief-details: A quantized version of DeepSeek-R1-Distill-Qwen-1.5B-uncensored offering multiple GGUF formats, optimized for different size/quality trade-offs ranging from 0.9GB to 3.7GB.
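As a minimal sketch of navigating such a size/quality trade-off, the helper below picks the largest quantization that fits a given memory budget. The quant names and file sizes are illustrative assumptions, not the actual repository contents; only the 0.9GB-3.7GB range comes from the listing above.

```python
# Pick the largest GGUF quantization that fits a memory budget.
# Quant names and sizes are hypothetical examples for illustration.

def pick_quant(budget_gb, files):
    """Return the name of the largest file with size <= budget_gb, or None."""
    fitting = [(size, name) for name, size in files.items() if size <= budget_gb]
    if not fitting:
        return None
    # max() compares the (size, name) tuples by size first.
    return max(fitting)[1]

QUANT_FILES = {  # hypothetical quant -> size in GB
    "IQ2_XS": 0.9,
    "Q4_K_M": 1.2,
    "Q6_K": 1.9,
    "Q8_0": 2.3,
    "F16": 3.7,
}

if __name__ == "__main__":
    print(pick_quant(2.0, QUANT_FILES))  # largest quant under 2 GB -> Q6_K
```

Larger quants generally preserve more of the original model's quality at the cost of memory and bandwidth, so picking the biggest file that fits is a common rule of thumb.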
Brief-details: Compact BERT-small model for Czech, pre-trained with the RetroMAE objective. Optimized for semantic embeddings and NLP tasks like similarity search and retrieval.
Brief-details: EVA02 base model with CLIP training, patch size 16, 224x224 input resolution. Merged from multiple checkpoints for enhanced vision tasks.
Brief-details: Qwen1.5-1.8B-Chat-GGUF is a compact chat model with 1.8B parameters, offering multiple quantization options and optimized for edge deployment with LlamaEdge compatibility.
Brief-details: Voice generation model inspired by Skyrim game voices, created by tylermaister and hosted on Hugging Face, designed for game-like voice synthesis.
Brief-details: A specialized image generation model focused on clean, high-quality outputs with NSFW capabilities. Features strong performance with anime-style artwork and scenic compositions.
Brief-details: Fine-tuned wav2vec2-xls-r-300m model for Albanian speech recognition, trained on the Common Voice dataset for 30 epochs with the Adam optimizer.
Brief-details: Large Vision Transformer model with registers for improved attention maps and better performance, built by Facebook using the DINOv2 architecture.
Brief-details: KoBART-base-v2 is a Korean BART model enhanced with chat data for improved long-sequence handling, achieving 90.1% accuracy on the NSMC benchmark.
Brief-details: PolyMorphMix is a versatile generative AI model from digiplay, designed for image generation with adaptable style-mixing capabilities and high-quality outputs.
Brief-details: Quantized version of the Janus-Pro-7B-LM model offering multiple compression formats from 2.8GB to 13.9GB, optimized for different speed/quality trade-offs.
Brief-details: PerfectDeliberate-Anime_v2 is a specialized AI model for anime-style image generation, built by digiplay, focusing on high-quality anime art creation.
Brief-details: Fantasy-focused AI image generation model by digiplay, optimized for anime-style artwork with emphasis on platinum blonde characters and cinematic lighting effects.
Brief-details: Megrez-3B-Omni is a multi-modal language model supporting text, image, and audio understanding. Features 4B parameters, achieves SOTA performance on OpenCompass (66.2), and maintains strong language capabilities.
Brief-details: Open-source 7B parameter LLM based on Mistral, trained on Orca-style dataset. Uncensored model with strong performance (55.85 avg on benchmarks). Commercial-use friendly.
Brief-details: DeepSeek-R1-Medical-COT is a specialized medical LLM fine-tuned from DeepSeek's R1 architecture, optimized with Unsloth for 2x faster training.