Brief-details: A powerful multimodal model series capable of processing text, images, and videos with an 8k context window, available in sizes from 1B to 108B parameters. MIT licensed.
Brief-details: UniFormer is a powerful vision transformer that combines convolution and self-attention, achieving 86.3% top-1 accuracy on ImageNet-1K without extra training data.
Brief-details: A 12.2B parameter GGUF-quantized language model based on Mistral, optimized with Unsloth for up to 2x faster training and inference.
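For GGUF quants like this one, a minimal loading sketch with llama-cpp-python (the filename, context size, and generation settings are illustrative assumptions, not taken from the model card):

```python
from llama_cpp import Llama

# Hypothetical path to a downloaded GGUF quant file.
llm = Llama(
    model_path="./model-Q4_K_M.gguf",
    n_ctx=4096,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The same pattern applies to the other GGUF entries below; only the weight file and the chat template baked into the GGUF metadata change.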
Brief-details: Freeze-Omni is an Apache-2.0 licensed model developed by VITA-MLLM, focusing on safe and ethical AI applications with comprehensive usage guidelines.
Brief-details: OLMo-2-1124-13B-DPO is a 13B parameter language model from Allen AI, trained with DPO on the Tülu 3 dataset and optimized for diverse tasks including math and reasoning.
Brief-details: AI language model (13B params) from Allen AI, fine-tuned on the Tülu 3 dataset. Optimized for diverse tasks including math and reasoning. Apache 2.0 licensed.
Brief-details: Open language model by Allen AI, 7B params, SFT-tuned on the Tülu 3 dataset. Strong performance on diverse tasks, Apache 2.0 licensed. Part of the OLMo 2 family.
Brief-details: OLMo-2-1124-7B-DPO is an open-source 7B parameter language model fine-tuned with DPO, optimized for instruction following with strong results on benchmarks like MATH and GSM8K.
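The OLMo 2 instruct/DPO checkpoints above are standard Hugging Face repos, so a hedged loading sketch with transformers looks like this (the repo id matches the naming in these entries but should be verified on the Hub before use):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "allenai/OLMo-2-1124-7B-DPO"  # assumed repo id, check on the Hub
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is 12 * 13? Answer briefly."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```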
Brief-details: A 7B parameter GGUF model based on Mistral-7B-Instruct-v0.2, fine-tuned on Gutenberg datasets using ORPO technique for enhanced literary capabilities.
Brief-details: Advanced image-variation model combining SigLIP vision features with the Flux architecture for style transfer and image generation with controllable style constraints.
Brief-details: 8B parameter merged LLM combining adventure, writing, and multilingual capabilities using the Model Stock method. Built on a LLaMA3 base with FP16 precision.
Brief-details: A powerful 123B parameter LLM based on Largestral 2411, featuring system prompt support and enhanced creative capabilities in the v2.2 variant.
Brief-details: An anime-style LoRA model for FLUX.1-dev that specializes in generating anime-themed images with emphasis on scenic environments and character poses.
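Loading a FLUX.1-dev LoRA such as this one is typically a two-step affair in diffusers: load the base pipeline, then attach the adapter. A sketch (the adapter repo id, weight name, and prompt are placeholders):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
# Placeholder adapter repo and weight file; substitute the actual LoRA.
pipe.load_lora_weights("<anime-lora-repo>", weight_name="lora.safetensors")
pipe.to("cuda")

image = pipe(
    "anime scenery, mountain shrine at dusk, character in foreground",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("out.png")
```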
Brief-details: A 32B parameter coding-focused LLM optimized for MLX, quantized to 3-bit precision. Built on Qwen2.5 architecture with Apache 2.0 license and MLX framework support.
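MLX quants like this one target Apple-silicon inference via mlx-lm. A minimal sketch (the repo id follows the usual mlx-community naming convention but is an assumption):

```python
from mlx_lm import load, generate

# Assumed repo id in the mlx-community naming convention.
model, tokenizer = load("mlx-community/Qwen2.5-Coder-32B-Instruct-3bit")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a Python function that reverses a string."}],
    tokenize=False,
    add_generation_prompt=True,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```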
Brief-details: 7.24B parameter GGUF model optimized for efficiency with multiple quantization options, featuring importance-matrix (imatrix) quantization and English language support.
Brief-details: Quantized version of Stable Diffusion 3.5 Medium in GGUF format, offering multiple quantization levels (4-16 bit) for efficient text-to-image generation.
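Recent diffusers releases (0.32+) can load GGUF checkpoints for the SD3 transformer directly; a hedged sketch, where the local .gguf path is hypothetical:

```python
import torch
from diffusers import (
    GGUFQuantizationConfig,
    SD3Transformer2DModel,
    StableDiffusion3Pipeline,
)

# Hypothetical path to a GGUF quant of the SD 3.5 Medium transformer.
transformer = SD3Transformer2DModel.from_single_file(
    "./sd3.5_medium-Q4_0.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe("a watercolor fox in a pine forest", num_inference_steps=28).images[0]
image.save("fox.png")
```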
Brief-details: NoobAI XL V-Pred based merged model with pencil-style output capabilities, featuring an open-source license and a community-focused development approach.
Brief-details: A 7B parameter fine-tuned variant of VinaLLaMA specialized for mathematical tasks, adapted efficiently with PEFT and distributed in Safetensors format.
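A PEFT adapter distributed this way loads on top of its base model rather than standalone. A sketch under assumed repo ids (both are placeholders; the adapter card names the true base):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "vilm/vinallama-7b"       # assumed base model, verify on the adapter card
adapter_id = "<math-adapter-repo>"  # placeholder for the PEFT adapter repo

tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # loads Safetensors adapter weights
model = model.merge_and_unload()  # optional: fold the adapter into the base weights
```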
Brief-details: Insurance-focused 8B parameter LLM based on Llama 3, quantized for efficient deployment. Specialized in insurance policy analysis, claims processing, and coverage explanations.
Brief-details: 72B parameter Qwen2.5-based model with multiple quantization options (25GB-77GB), optimized for text generation and conversation, trained on 10 curated datasets.
Brief-details: A 7.62B parameter merged LLM combining HomerAnvita-NerdMix and HomerCreative-Mix, achieving a top ranking among sub-13B models with strong instruction-following performance (78.08% IFEval accuracy).