Brief Details: A specialized LoRA model for FLUX.1-dev that generates simple, children's-drawing-style sketches in pastel colors, trained to produce innocent, sketch-style artwork.
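A minimal diffusers sketch for applying such a LoRA to FLUX.1-dev is shown below; the LoRA repo id and the prompt are placeholders, since the entry does not name them.

```python
# Hypothetical usage sketch: applying a sketch-style LoRA to FLUX.1-dev.
# "your-account/flux-children-sketch-lora" is a placeholder repo id.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("your-account/flux-children-sketch-lora")  # placeholder
pipe.to("cuda")

image = pipe(
    "a child's pastel crayon sketch of a cat in a garden",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("sketch.png")
```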
Brief-details: Quantized version of Gemma-2-9B with multiple GGUF variants optimized for different hardware setups. Features 9.24B parameters with comprehensive quantization options from 3.43GB to 36.97GB.
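As a rough guide to running one of the GGUF variants locally, here is a hedged sketch with llama-cpp-python; the file name is illustrative, and the quant you pick should match your RAM/VRAM budget.

```python
# Illustrative sketch: loading a 4-bit GGUF variant of Gemma-2-9B with llama-cpp-python.
# The exact .gguf file name depends on which quantization you download.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-2-9b-it-Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one paragraph, why quantize an LLM?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```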
Brief Details: Advanced image captioning model based on Florence-2, optimized for AI art prompting. Features 823M params, multiple caption styles, and low VRAM usage.
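A captioning sketch following the standard Florence-2 usage pattern is below; the base model id is shown as a stand-in, and the fine-tuned checkpoint plus any extra caption-style task tokens are assumptions to replace with the actual repo's instructions.

```python
# Sketch of the standard Florence-2 captioning flow; swap in the fine-tuned repo id.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-large"  # placeholder for the fine-tuned captioner
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, trust_remote_code=True
).to("cuda")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("example.jpg").convert("RGB")
task = "<MORE_DETAILED_CAPTION>"  # task token selects the caption style
inputs = processor(text=task, images=image, return_tensors="pt").to("cuda", torch.float16)

ids = model.generate(**inputs, max_new_tokens=256, num_beams=3)
raw = processor.batch_decode(ids, skip_special_tokens=False)[0]
print(processor.post_process_generation(raw, task=task, image_size=image.size))
```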
BRIEF DETAILS: Specialized 8B parameter healthcare LLM built on Llama 3, achieving SOTA results for its size. Features advanced medical QA capabilities and ethical safeguards.
Brief-details: A 12.9B parameter MoE model based on the Mixtral architecture, optimized with DPO and LASER techniques. Strong performance on reasoning tasks, with a 67.16% average on the Open LLM Leaderboard.
Brief-details: A powerful 47B parameter Traditional Chinese-focused instruction-tuned LLM built on Mixtral-8x7B, featuring expanded vocabulary and achieving GPT-3.5-turbo level performance.
Brief-details: A 7B parameter Mamba architecture model trained on the RefinedWeb dataset, featuring linear-time sequence modeling and strong performance across NLP tasks.
Brief-details: Advanced multimodal AI model combining Mistral-7B with CLIP-ViT-L for image understanding, featuring dual-encoder architecture and Russian language support.
BRIEF DETAILS: TinyLLaVA is a 1.41B parameter multimodal model that efficiently handles image-text tasks, achieving competitive performance against larger 7B models with significantly fewer parameters.
BRIEF DETAILS: UltraRM-13b is a SOTA reward model built on LLaMA2-13B, achieving a 92.30% win rate vs. text-davinci-003 on the AlpacaEval benchmark.
Brief-details: A 34B parameter Yi-based model fine-tuned on light novels and roleplay data, optimized for creative writing and character interactions and distributed in GGUF format.
Brief-details: TinyLlama-1.1B-Chat is a compact 1.1B parameter chat model based on the Llama 2 architecture and trained on 3T tokens. Optimized for efficient deployment under the Apache 2.0 license, it supports text generation with minimal computational requirements.
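A minimal chat sketch with the transformers pipeline, assuming the v1.0 checkpoint on the Hub:

```python
# Minimal sketch: chatting with TinyLlama-1.1B-Chat (v1.0 repo id assumed).
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What are small chat models useful for?"},
]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
out = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.7, top_p=0.95)
print(out[0]["generated_text"])
```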
Brief Details: An anime-styled variant of SSD-1B, merged with NekorayXL and fine-tuned through distillation. Supports text-to-image generation with specialized anime aesthetics.
Brief-details: A 1B parameter code generation model fine-tuned on the evol-codealpaca dataset, achieving 39% pass@1 on HumanEval and 31.74% on MBPP.
Brief Details: A 13B parameter GPTQ-quantized LLaMA2-based model merging Pygmalion-2 and MythoMax, optimized for roleplay and chat, with multiple quantization options available.
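A hedged loading sketch with transformers (optimum and auto-gptq installed); the repo id is a placeholder and the Alpaca-style prompt is an assumption, since merges like this often accept several formats.

```python
# Illustrative sketch: loading a 13B GPTQ merge with transformers.
# "TheBloke/placeholder-13B-GPTQ" stands in for the actual quantized repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/placeholder-13B-GPTQ"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Alpaca-style prompt shown as an assumption; check the model card for its format.
prompt = "### Instruction:\nWrite a short in-character greeting.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```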
Brief Details: Japanese vision-language model for image captioning and VQA tasks. Built on InstructBLIP architecture with Japanese StableLM, trained on CC12M and COCO datasets.
Brief-details: WizardMath-7B-V1.0 is a specialized mathematical-reasoning LLM built on the Llama 2 architecture, achieving 54.9% on the GSM8k and 10.7% on the MATH benchmarks.
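A sketch of querying it with transformers, using the Alpaca-style chain-of-thought prompt the WizardMath cards typically recommend (repo id and exact prompt wording are assumptions):

```python
# Sketch: math question to WizardMath-7B-V1.0; repo id and prompt format assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLMTeam/WizardMath-7B-V1.0"  # repo id may differ
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

question = "A bag holds 3 red and 5 blue marbles. What fraction of the marbles are red?"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{question}\n\n### Response: Let's think step by step."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```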
Brief Details: 13B parameter LLaMA2-based model optimized for 8K context, GGML quantized for CPU/GPU inference, trained on Orca chat dataset for instruction following.
BRIEF DETAILS: Text-to-image diffusion model with built-in VAE, optimized for realistic image generation with specific focus on quality control and anatomical accuracy.
Brief-details: Multi-stage blend model combining 15+ AI models for Stable Diffusion, optimized for high-quality anime-style image generation with advanced weight calibration.
Brief Details: Korean-language AI model with 13.1B parameters, fine-tuned on the KoAlpaca Dataset v1.1b and optimized for text generation and multilingual tasks.