Brief Details: Specialized time-series forecasting model fine-tuned on 25M data points, optimized for intermittent demand prediction with a WQL (weighted quantile loss) score of 0.5908
BRIEF-DETAILS: Quantized version of Mistral-Small-24B featuring multiple GGUF variants optimized for different size/performance trade-offs, ranging from 5.4GB to 19.4GB
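As a rough illustration of how GGUF variants like these are typically run locally, here is a minimal sketch using llama-cpp-python; the file name, context size, and sampling settings below are placeholders rather than details from the model card.

```python
# Minimal sketch, assuming llama-cpp-python is installed and a GGUF quant
# has been downloaded locally; the file name below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./Mistral-Small-24B-Q4_K_M.gguf",  # hypothetical quant file
    n_ctx=4096,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

Larger quants generally trade RAM for quality: the 5.4GB end of the range typically corresponds to more aggressive low-bit quantization than the 19.4GB end.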
Brief-details: PLaMo-13B-Instruct is a Japanese-English bilingual LLM with 13B parameters, an 8192-token context length, and instruction tuning optimized for task completion.
Brief-details: ELECTRA-Small model trained on 14.82B tokens of Nordic languages (Icelandic, Norwegian, Swedish, Danish) with a 96K vocabulary, optimized for Nordic NLP tasks
Brief-details: Experimental tiny random model based on MiniCPM3 architecture, created by katuni4ka. Part of a series of tiny random model experiments for research purposes.
BRIEF-DETAILS: State-of-the-art depth estimation model trained on 62M images using DPT architecture with DINOv2 backbone, offering zero-shot depth estimation capabilities
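For context on how a DPT-style depth model with a DINOv2 backbone is usually invoked, the following is a minimal sketch with the transformers depth-estimation pipeline; the model ID is an assumed example, not one confirmed by the entry above.

```python
# Minimal sketch using the transformers depth-estimation pipeline.
# The model ID is an assumed example, not necessarily the model above.
from transformers import pipeline
from PIL import Image

depth_estimator = pipeline("depth-estimation",
                           model="LiheYoung/depth-anything-base-hf")
image = Image.open("scene.jpg")            # any RGB photograph
result = depth_estimator(image)
result["depth"].save("scene_depth.png")    # PIL image of the predicted depth map
```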
Brief-details: Image generation model focused on a specific photographic style, activated by the 'pfbk' trigger word. Specializes in interior portrait scenes with particular lighting conditions.
BRIEF DETAILS: CLIP ConvNeXt-Base model trained on the LAION-400M dataset for 13B samples seen with a 51K batch size, optimized for vision-language tasks
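A hedged sketch of zero-shot image/text matching with a ConvNeXt-Base CLIP in open_clip; the pretrained tag below is an assumption based on the LAION-400M s13B/b51K training details above and should be checked against the model card.

```python
# Minimal sketch of zero-shot image/text matching with open_clip; the
# pretrained tag is an assumption, not confirmed by the entry above.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "convnext_base", pretrained="laion400m_s13b_b51k")
tokenizer = open_clip.get_tokenizer("convnext_base")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probability of each caption matching the image
```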
Brief-details: A tiny random model by katuni4ka based on a Snowflake model architecture, intended for lightweight, efficient computation in testing and research settings rather than production use.
Brief-details: Surya Layout2 is a specialized layout analysis model by vikp, focused on document structure understanding and layout parsing. Available on HuggingFace.
Brief-details: French question-answering model built on CamemBERT, fine-tuned on the PIAF, FQuAD, and SQuAD-FR datasets, achieving an F1 score of ~80% on French QA tasks.
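A minimal extractive-QA sketch with the transformers pipeline; the model ID is a placeholder standing in for the CamemBERT QA checkpoint described above.

```python
# Minimal sketch of French extractive QA; replace the placeholder ID with
# the actual CamemBERT QA checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="camembert-qa-fr-placeholder")
answer = qa(
    question="Où se trouve la tour Eiffel ?",
    context="La tour Eiffel est un monument situé à Paris, en France.",
)
print(answer["answer"], round(answer["score"], 3))
```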
BRIEF-DETAILS: A tiny, randomly initialized BioGPT variant for causal language modeling, intended for biomedical text-generation research and testing rather than production use
Brief Details: Quantized version of Microsoft's phi-4 model (14.7B parameters) optimized for Japanese text using GPTQ Int8 quantization, maintaining 99.9% of original performance.
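A hedged sketch of loading a GPTQ-quantized causal LM with transformers; this assumes a GPTQ backend (e.g. auto-gptq or gptqmodel) is installed, and the model ID is a placeholder rather than the actual repository name.

```python
# Minimal sketch, assuming a GPTQ Int8 checkpoint that transformers can
# load with a GPTQ backend installed; the model ID is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phi-4-japanese-gptq-int8-placeholder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "日本の首都はどこですか？"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```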
BRIEF-DETAILS: 14B-parameter uncensored LLM based on DeepSeek/Qwen with multiple GGUF quantizations (4.5GB to 56GB), optimized for efficient inference and RAM usage.
Brief-details: ComfyUI-compatible model using NF4 quantization for efficient deployment, with specialized checkpoints marked 'bnb' (bitsandbytes) in their filenames for optimized performance.
Brief Details: A powerful 7B-parameter embedding model built on Qwen1.5, featuring 4096-dimensional embeddings, a 32k context window, and SOTA performance on the MTEB/C-MTEB benchmarks.
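A minimal sketch of computing sentence embeddings with sentence-transformers, assuming the model ships in that format; the model ID is a placeholder, and trust_remote_code is often needed for Qwen-based embedding models.

```python
# Minimal sketch; the model ID is a placeholder for the Qwen1.5-based
# embedding model described above.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("qwen1.5-7b-embedding-placeholder",
                            trust_remote_code=True)
sentences = ["What is retrieval-augmented generation?",
             "RAG combines a retriever with a generator."]
embeddings = model.encode(sentences, normalize_embeddings=True)
print(embeddings.shape)                      # expected (2, 4096) per the entry above
print(float(embeddings[0] @ embeddings[1]))  # cosine similarity (vectors normalized)
```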
Brief Details: CONCH - vision-language foundation model for computational pathology from MahmoodLab, distributed under strict non-commercial licensing that requires institutional verification and individual registration for access.
Brief Details: CounterfeitXL - Advanced image generation model with multiple negative embedding options (Standard/Realistic/Anime). Available on Civitai and Hugging Face.
BRIEF-DETAILS: Google's 27B-parameter instruction-tuned model from the Gemma family; access requires accepting Google's license on Hugging Face
Brief-details: Palmyra-Med-70B-32K is a specialized 70B-parameter medical language model by Writer, featuring an extended 32K context window for healthcare applications.
BRIEF-DETAILS: Spanish NER model built on the uncased DistilBERT base model, specialized for Named Entity Recognition in Spanish text. Lightweight and efficient.
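A minimal sketch of Spanish NER with the transformers pipeline; the model ID is a placeholder for the DistilBERT-based model described above.

```python
# Minimal sketch of Spanish NER; replace the placeholder ID with the
# actual DistilBERT-based Spanish NER checkpoint.
from transformers import pipeline

ner = pipeline("ner", model="distilbert-ner-es-placeholder",
               aggregation_strategy="simple")
text = "Gabriel García Márquez nació en Aracataca, Colombia."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```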