Brief-details: A 7B parameter GGUF-quantized roleplay model based on Mistral, offering multiple quantization versions from 2.8GB to 14.6GB with varying quality-size tradeoffs.
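The quality-size tradeoff can be sanity-checked with simple arithmetic: file size divided by parameter count gives the approximate bits stored per weight. A minimal sketch using the sizes quoted above (the helper name is illustrative, and the math ignores metadata overhead and the GB/GiB distinction):

```python
def bits_per_weight(file_size_gb: float, params_billions: float) -> float:
    """Approximate bits stored per model weight.
    Ignores file metadata and the GB vs. GiB distinction."""
    total_gigabits = file_size_gb * 8          # gigabytes -> gigabits
    return total_gigabits / params_billions    # gigabits / (billions of params) = bits per weight

# The 2.8 GB quant of a 7B model stores roughly 3.2 bits per weight,
# consistent with an aggressive 2-3 bit quantization scheme.
print(round(bits_per_weight(2.8, 7.0), 1))   # ~3.2
# The 14.6 GB file works out to roughly 16.7 bits, i.e. essentially fp16.
print(round(bits_per_weight(14.6, 7.0), 1))  # ~16.7
```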
Brief-details: An 8B parameter merged language model created using LazyMergekit, combining three base models with specific weight and density configurations for enhanced text generation capabilities.
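Merges like this are typically driven by a mergekit YAML config. A hedged sketch of the shape such a config takes (the model names, weights, densities, and merge method below are placeholders, not the actual recipe):

```yaml
# Illustrative mergekit config -- all names and parameters are placeholders.
models:
  - model: org-a/model-one-8b
    parameters:
      weight: 0.4
      density: 0.6
  - model: org-b/model-two-8b
    parameters:
      weight: 0.3
      density: 0.5
  - model: org-c/model-three-8b
    parameters:
      weight: 0.3
      density: 0.5
merge_method: dare_ties   # weight/density parameters fit DARE-TIES-style merges
base_model: org-a/model-one-8b
dtype: bfloat16
```

Such a config is run with mergekit's CLI, e.g. `mergekit-yaml config.yml ./merged-model`.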
Brief-details: A fine-tuned TrOCR large model specialized for printed text recognition, particularly optimized for CMC7 and MICR formats with 609M parameters.
Brief-details: PyTorch-based prompt generator (559M params) specialized for text-to-image applications. MIT licensed, optimized with safetensors.
Brief-details: LSTM-based stock price prediction model trained on Google stock data, featuring 5 layers, 64 hidden dimensions, and 500 training epochs.
Brief-details: ID-conditioned face generation model that creates diverse, consistent photos from ArcFace embeddings. Built on SD-1.5 with CLIP integration & pose control.
Brief-details: OneKE is a bilingual (Chinese-English) large language model specialized in knowledge extraction, supporting NER, relation extraction (RE), and event extraction via schema-based instructions.
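Schema-based extraction means the model is prompted with an explicit schema alongside the input text, rather than relying on an implicit task description. A minimal sketch of building such an instruction (the JSON field names are illustrative, not OneKE's exact prompt format):

```python
import json

def build_ner_instruction(text: str, entity_types: list[str]) -> str:
    """Pack a schema-guided NER request into one JSON instruction string.
    Field names are illustrative, not OneKE's exact format."""
    payload = {
        "task": "NER",
        "schema": entity_types,  # entity types the model should extract
        "input": text,
    }
    return json.dumps(payload, ensure_ascii=False)

instruction = build_ner_instruction(
    "Acme Corp opened an office in Berlin in 2021.",
    ["organization", "location", "date"],
)
print(instruction)
```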
Brief-details: Real-time pose estimation model optimized for mobile devices, featuring dual-model architecture with 815K and 3.37M parameters for detection and landmark tracking respectively.
Brief-details: A 4B-parameter Vietnamese language model fine-tuned for chat applications, trained on 102B tokens with 8192 context length and 20K vocabulary.
Brief-details: InstructIR is a state-of-the-art image restoration model that follows natural-language instructions to enhance and restore degraded images across multiple degradation types.
Brief-details: Kunoichi-7B is a powerful 7.24B parameter general-purpose model optimized for roleplay, achieving impressive MT-Bench (8.14) and MMLU (64.9) scores.
Brief-details: SDXL Detector (86.8M params) - Fine-tuned image classifier for detecting SDXL-generated images with 97.3% F1 score. Built on umm-maybe detector.
Brief-details: Legal AI model (7.2B params) fine-tuned on Mistral-7B for detecting racial covenants in property deeds, achieving 99.7% F1 score.
Brief-details: A 7.24B parameter Mistral-based model specialized in grammar correction and text rephrasing, optimized for GGUF format deployment with custom prompt templates.
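"Custom prompt templates" here means wrapping the user's text in the instruction format the model was tuned on before passing it to the GGUF runtime. A hedged sketch (the Alpaca-style layout below is illustrative, not necessarily this model's exact template):

```python
# Illustrative prompt template -- the exact format this model expects may differ.
TEMPLATE = (
    "### Instruction:\n"
    "Correct the grammar of the following text and rephrase it for clarity.\n\n"
    "### Input:\n{text}\n\n"
    "### Response:\n"
)

def build_prompt(text: str) -> str:
    """Fill the user's text into the correction template."""
    return TEMPLATE.format(text=text)

prompt = build_prompt("She dont like going to school on mondays.")
print(prompt)
```

The resulting string is what gets fed to the model; the completion after `### Response:` is the corrected text.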
Brief-details: Chronos Hermes 13B GGUF - A 13B parameter LLM optimized for story writing and descriptive outputs, merging Chronos and Hermes models for enhanced narrative capabilities.
Brief-details: YOLOv8-based face detection model fine-tuned on 10k+ images on an NVIDIA V100 GPU, delivering robust face detection. AGPL-3.0 licensed.
Brief-details: State-of-the-art Vietnamese-to-English translation model developed by VinAI Research, featuring mBART architecture and AGPL-3.0 license.
Brief-details: A fine-tuned Donut-based document-understanding model (202M params) that converts invoice/receipt images to structured JSON/XML without OCR, optimized for efficiency.
Brief-details: YOLOv8-based object detection model for web form UI elements. Achieves 0.52 mAP@0.95. Trained on 600 images. Detects form fields like name, email, buttons.
Brief-details: An innovative 7B parameter LLM fine-tuned for web search tasks, featuring an extended context window and specialized web content processing capabilities.
Brief-details: DISC-MedLLM is a Chinese medical domain LLM based on Baichuan-13B, specialized for healthcare conversations with 470k+ training examples and strong knowledge integration.