Brief Details: ViT-based deepfake detection model achieving 98.84% accuracy, specializing in binary classification between real and AI-generated images. Built on google/vit-base-patch16-224-in21k architecture.
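A vit-base-patch16-224 backbone tiles each 224x224 input into 16x16 patches and classifies from a prepended [CLS] token; a minimal sketch of the patch arithmetic and the final two-way softmax (the logit values below are hypothetical, not taken from the actual model):

```python
import numpy as np

def num_patches(image_size: int = 224, patch_size: int = 16) -> int:
    # ViT tiles the image into non-overlapping patches; a [CLS] token is
    # prepended before the transformer encoder.
    return (image_size // patch_size) ** 2

def classify(logits: np.ndarray, labels=("real", "ai-generated")) -> str:
    # Binary head: softmax over two logits, return the argmax label.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return labels[int(np.argmax(probs))]

print(num_patches())                   # 196 patches per image
print(classify(np.array([0.3, 2.1])))  # hypothetical logits
```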
BRIEF-DETAILS: Specialized Qwen-7B fine-tuned model for cryptocurrency and Web3, offering DeFi expertise, smart contract analysis, and blockchain development guidance.
Brief-details: HealthGPT-L14 is a specialized multi-modal medical AI model developed by lintw, designed for unified healthcare tasks and medical analysis.
Brief-details: A specialized software solution for carbon credit trading, featuring blockchain integration, smart contracts, and decentralized trading mechanisms to enhance market transparency and efficiency.
BRIEF DETAILS: A 1.5B parameter draft model optimized for speculative decoding, quantized to 6.5-bit precision. Particularly effective when paired with a larger target model to accelerate generation.
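In speculative decoding, the small draft model proposes several tokens that the large target model verifies in a single pass, accepting each draft token with probability min(1, p_target/p_draft). A toy sketch of that acceptance rule over hypothetical per-token probabilities (not the actual 1.5B model):

```python
import random

def speculative_accept(draft_tokens, p_draft, p_target, rng):
    """Accept draft tokens left-to-right with prob min(1, p_t / p_d);
    stop at the first rejection (the target would then resample there)."""
    accepted = []
    for tok, pd, pt in zip(draft_tokens, p_draft, p_target):
        if rng.random() < min(1.0, pt / pd):
            accepted.append(tok)
        else:
            break
    return accepted

rng = random.Random(0)
# Hypothetical draft proposals with draft/target probabilities per token.
tokens = [11, 42, 7, 99]
p_d = [0.50, 0.40, 0.60, 0.30]
p_t = [0.55, 0.45, 0.10, 0.35]  # target disagrees on the third token
print(speculative_accept(tokens, p_d, p_t, rng))
```

Tokens the target rates at least as likely as the draft are always kept, so agreement between the two models directly determines the speed-up.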
BRIEF-DETAILS: 12B parameter GGUF model with multiple quantization options (Q2-Q8), optimized for efficient deployment and good quality/size trade-offs.
Brief Details: GGUF quantized version of FluentlyLM-Prinum with multiple quality variants (Q2-Q8), offering flexible balance between model size (12.4-34.9GB) and performance.
Brief-details: A LoRA adapter converted to GGUF format for use with llama.cpp, designed to enhance the LLaMA 3.2 1B model with capabilities distilled from Claude-3.5-Sonnet.
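A LoRA adapter stores two low-rank matrices A and B, and applying it amounts to adding the scaled product to the frozen base weight: W' = W + (alpha/r)·BA. A minimal numpy sketch of that merge (shapes and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
d_out, d_in, r, alpha = 8, 8, 2, 4      # hypothetical shapes; r = LoRA rank

W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01   # LoRA down-projection
B = np.zeros((d_out, r))                # LoRA up-projection (zero-initialised)

def merge(W, A, B, alpha, r):
    # Merged weight: W' = W + (alpha / r) * B @ A
    return W + (alpha / r) * (B @ A)

# Zero-initialised B makes the adapter a no-op, as at the start of training;
# after training, the update B @ A has rank at most r.
assert np.allclose(merge(W, A, B, alpha, r), W)
B_trained = rng.normal(size=(d_out, r))   # stand-in for trained values
delta = merge(W, A, B_trained, alpha, r) - W
print(np.linalg.matrix_rank(delta))       # at most r
```

Because only A and B are shipped, the adapter file stays tiny relative to the 1B base model.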
BRIEF-DETAILS: Weighted/imatrix quantized version of Nomad 12B V6, offering multiple GGUF variants optimized for different size/performance trade-offs, from 3.1GB to 10.2GB.
BRIEF-DETAILS: Qwen2.5-14B-HyperMarck GGUF quantized variants offering multiple compression levels (3.7GB-12.2GB) with imatrix optimization for efficient deployment.
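The GGUF variants above trade file size for reconstruction fidelity by storing weights in low-bit blocks with a per-block scale. A toy 4-bit blockwise quantizer illustrating the idea (a simplification, not the actual GGUF k-quant formats):

```python
import numpy as np

def quantize_block(w: np.ndarray):
    # One scale per block; values rounded into the signed 4-bit range [-7, 7].
    scale = max(float(np.abs(w).max()) / 7.0, 1e-8)
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_block(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=32).astype(np.float32)   # one 32-weight block
q, s = quantize_block(w)
err = float(np.abs(dequantize_block(q, s) - w).max())
print(q.dtype, err)  # int8 storage; error bounded by half the scale
```

Lower-bit formats shrink the file further at the cost of a larger per-block rounding error, which is the quality/size trade-off the Q2-Q8 naming refers to.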
Brief Details: A Stable Diffusion model by panchoavila that uses "blow" as a trigger word for image generation. Available in Safetensors format.
Brief-details: Evo2_40b is a cutting-edge DNA language model with 50 layers and 40B parameters, capable of processing sequences up to 1M tokens in length.
BRIEF DETAILS: 12B parameter language model optimized for storytelling and creative writing. Merged LLM with enhanced Russian language support and flexible response patterns across ChatML/Mistral formats.
Brief Details: EQ-VAE-EMA is a regularized VAE model that enhances image generation by enforcing equivariance of the latent space under scaling and rotation transformations.
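Equivariance here means encoding commutes with the transformation: enc(rot(x)) equals rot(enc(x)). A toy check of that property, using 2x2 average pooling as a stand-in encoder (the real EQ-VAE regularizes a learned encoder toward this behaviour):

```python
import numpy as np

def encode(x: np.ndarray) -> np.ndarray:
    # Stand-in "encoder": 2x2 average pooling, which is trivially
    # equivariant to 90-degree rotations on even-sized inputs.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))

# Equivariance check: encoding then rotating == rotating then encoding.
lhs = encode(np.rot90(x))
rhs = np.rot90(encode(x))
print(np.allclose(lhs, rhs))  # True for this equivariant encoder
```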
Brief-details: Cybersecurity-focused LLM based on Llama 3.1, pre-trained on a 2.77B-token cybersecurity corpus, showing a 14.84% improvement on security benchmarks.
BRIEF DETAILS: A specialized reasoning model built on phi-4 that excels at toxicity detection, hallucination identification, and RAG relevance assessment, providing structured binary classifications.
BRIEF-DETAILS: A comprehensive collection of GGUF quantizations of the 0x-lite model, offering various compression levels from 5GB to 29GB with different quality-size tradeoffs.
BRIEF-DETAILS: Optimized DeepSeek v2.5 model with dynamic quantization, offering multiple compression levels (49-97GB) while maintaining performance through strategic layer compression.
Brief-details: XLM-RoBERTa model fine-tuned for hallucination detection in LLM outputs, part of SemEval 2025 Task 3, using the AdamW optimizer and a linear learning-rate schedule.
Brief Details: OTIS is an advanced anti-spam AI model designed to detect unwanted content, achieving 0.21 eval loss after 10 epochs of training; released under the BSD-3 license.
Brief-details: Medical imaging AI model trained on CheXpert and MIMIC-CXR datasets for generating chest X-ray impressions, focused on radiological analysis and reporting.