Brief-details: A fine-tuned Hindi sentiment analysis model based on Twitter-XLM-RoBERTa-base, achieving an F1-score of 0.89 for Hindi text classification in Devanagari script.
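For context, a minimal sketch of how such a fine-tuned classifier is typically loaded with the Hugging Face `pipeline` API; the repo ID below is a placeholder, not the actual model name.

```python
from transformers import pipeline

# Hypothetical repo ID standing in for the fine-tuned Hindi sentiment model.
classifier = pipeline(
    "text-classification",
    model="your-org/twitter-xlm-roberta-base-hindi-sentiment",
)

# Devanagari-script input; the pipeline returns a label and confidence score.
print(classifier("यह फिल्म बहुत अच्छी थी"))
```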
BRIEF-DETAILS: A specialized Stable Diffusion model trained on E621 data, focused on artistic content generation. Released under the CreativeML OpenRAIL-M license, with Discord bot integration.
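As a rough illustration (not the model's documented usage), a Stable Diffusion checkpoint like this is usually run through the `diffusers` library; the repo ID is a placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo ID for the E621-trained checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "your-org/e621-stable-diffusion", torch_dtype=torch.float16
).to("cuda")

# Generate a single image from a text prompt.
image = pipe("digital painting of a forest at dusk", num_inference_steps=30).images[0]
image.save("sample.png")
```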
BRIEF DETAILS: Image classification model trained on 30 real estate categories using a Vision Transformer architecture. Achieves 66.67% accuracy in identifying property spaces and features.
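A minimal sketch of classifying a property photo with a standard `transformers` image-classification head; the repo ID and image path are illustrative assumptions.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Placeholder repo ID for the 30-class real estate classifier.
model_id = "your-org/vit-real-estate-30"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("living_room.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the top logit back to its category name.
print(model.config.id2label[logits.argmax(-1).item()])
```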
Brief Details: SecurityLLM is a 7.24B parameter cybersecurity-focused LLM built on Mistral, offering expert guidance across 30+ security domains including threat analysis, compliance, and incident response.
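Since this is a Mistral-based chat model, usage would typically follow the standard `transformers` chat-template flow sketched below; the repo ID is a placeholder and the generation settings are arbitrary.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/SecurityLLM"  # placeholder repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Outline an incident response plan for a ransomware attack."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```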
Brief-details: Core ML implementation of the Depth Anything V2 model for depth estimation, optimized for Apple devices. 24.8M parameters, available in F16 and F32 variants.
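On-device usage with `coremltools` would look roughly like the sketch below; the package path, input size, and feature names are assumptions, since Core ML models expose model-specific names.

```python
import coremltools as ct
from PIL import Image

# Assumed path to the exported Core ML package (F16 or F32 variant).
model = ct.models.MLModel("DepthAnythingV2.mlpackage")

# Core ML accepts PIL images for image-typed inputs; the resize target,
# "image", and "depth" are assumed -- inspect model.get_spec() for the real names.
img = Image.open("scene.jpg").resize((518, 518))
prediction = model.predict({"image": img})
depth = prediction["depth"]
```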
Brief-details: A fine-tuned Whisper model optimized for Chinese ASR with enhanced punctuation capabilities, achieving a CER as low as 2.945% on the AISHELL-1 benchmark.
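A minimal transcription sketch using the `transformers` ASR pipeline; the repo ID and audio file are placeholders.

```python
from transformers import pipeline

# Placeholder repo ID for the Chinese-finetuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="your-org/whisper-chinese-punct")

# Returns punctuated Chinese text for the input audio clip.
result = asr("meeting_clip.wav")
print(result["text"])
```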
Brief Details: Ukrainian text editing model with 1.23B params, fine-tuned from mt0-large. Specializes in paraphrasing, simplification, and grammar correction.
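Because it is fine-tuned from mt0-large, the model follows the standard seq2seq generation flow; the repo ID and instruction-style prompt below are illustrative assumptions, not taken from the model card.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "your-org/mt0-large-ukrainian-edit"  # placeholder repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical prompt format; Ukrainian input meaning
# "This sentence should be rewritten more simply."
prompt = "Paraphrase: Це речення потрібно переписати простіше."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```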
BRIEF DETAILS: A 1.1B parameter Vision Transformer for anime image quality assessment, capable of analyzing 1024x1024 images and producing aesthetic scores.
Brief-details: EfficientViT-SAM is an accelerated Segment Anything Model offering high performance with reduced latency, available in multiple variants (L0-XL1) for different resolutions and speed requirements.
Brief-details: mPLUG-Owl3-7B is a state-of-the-art multi-modal LLM with 8.07B parameters, optimized for long image-sequence understanding and achieving 6x faster processing via Hyper Attention.
Brief-details: An 8B parameter GGUF version of Llama 3.1 Tulu, optimized for instruction-following, math, and general tasks, with strong benchmark performance.
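GGUF checkpoints like this one are typically run locally with llama.cpp or its Python bindings; a minimal sketch with `llama-cpp-python`, where the file name is a placeholder.

```python
from llama_cpp import Llama

# Placeholder GGUF file name; pick the quantization that fits your hardware.
llm = Llama(model_path="llama-3.1-tulu-8b.Q4_K_M.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Solve 12 * 17 and explain the steps."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```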
BRIEF DETAILS: 70B parameter instruction-following model based on Llama 3.1, optimized for diverse tasks including the MATH and GSM8K benchmarks. Strong performance on safety and reasoning tasks.
Brief-details: A 7.62B parameter vision-language model based on Qwen2-VL-7B-Instruct, converted to GGUF format, with support for image understanding and conversational AI.
Brief Details: A 70B parameter LLaMA-based model optimized for creative writing and roleplay, featuring reduced repetition and enhanced creativity through specialized training.
Brief Details: A specialized 8B parameter LLaMA model fine-tuned for roleplaying conversations, trained on 13K detailed dialogue pairs with rich character interactions.
Brief-details: BgGPT-Gemma-2-9B-IT is a Bulgarian-English LLM with 9.24B parameters, built on Google's Gemma 2, optimized for Bulgarian while maintaining English capabilities.
BRIEF DETAILS: Qwen2.5-Coder 32B instruction-tuned model with multiple GGUF quantizations, optimized for code generation and chat. Features reduced censorship, with quantized file sizes ranging from 9.96GB to 65.54GB.
BRIEF-DETAILS: A 22B parameter GGUF model fine-tuned from Mistral-Small-Instruct, with Alpaca prompt format support and enhanced instruction-following capabilities.
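For reference, the Alpaca prompt format mentioned above generally follows the instruction/response layout sketched here; treat it as the conventional template rather than this model's verified card.

```python
# Conventional Alpaca-style prompt template (assumed, not taken from the model card).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Summarize the plot of Hamlet in two sentences."
)
```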
Brief Details: 4-bit quantized version of Qwen2.5-72B-Instruct using AutoRound symmetric quantization, optimized for efficient deployment while maintaining accuracy.
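Assuming the AutoRound checkpoint is exported in a format `transformers` can load directly (e.g., a GPTQ-style export), loading is the usual `from_pretrained` call; the repo ID is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/Qwen2.5-72B-Instruct-AutoRound-4bit"  # placeholder repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" shards the quantized weights across available devices.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```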
Brief-details: A fine-tuned geospatial foundation model for predicting land surface temperature at 30m resolution from satellite imagery and climate data, supporting analysis of urban heat dynamics.
Brief Details: A 14.8B parameter language model based on Qwen2.5-14B, optimized for conversation and text generation, trained with the Axolotl framework.