Brief-details: A 6.74B parameter finance-focused LLM built on LLaMA-2-Chat, specialized through reading-comprehension training and achieving strong domain performance.
Brief-details: A 30B parameter GPTQ-quantized LLaMA model optimized for uncensored storytelling and chain-of-thought reasoning, available in multiple compression formats.
Brief-details: MLX-optimized Llama 2 7B chat model for Apple Silicon, converted to float16 format. Offers efficient text generation with Meta's renowned architecture.
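For reference, a minimal sketch of running such an MLX-converted chat model with the mlx-lm package on Apple Silicon; the repo id, prompt, and token budget below are illustrative assumptions, not taken from the model card.

```python
# Hypothetical usage sketch: load an MLX float16 Llama 2 chat conversion and generate text.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-2-7b-chat-mlx")  # placeholder repo id
prompt = "[INST] Explain why float16 inference is memory-efficient. [/INST]"
text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```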
Brief-details: A mirroring repository for Civitai models with a comprehensive organization structure for Automatic1111 installations, featuring efficient download workflows and file management systems.
Brief-details: A compact version of Stable Diffusion optimized for efficiency, offering similar image-generation quality at half the size with a 4x GPU and 12x CPU speedup.
Brief-details: An anime-focused image generation model optimized for cute, kawaii-style artwork with specific parameters (DDIM/DPM++ SDE Karras sampler, 20+ steps, Clipskip 2) based on 7th_Layer.
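The recommended settings above translate roughly into diffusers as sketched below; the checkpoint path, prompt, and the exact scheduler mapping for "DPM++ SDE Karras" are assumptions.

```python
# Hedged sketch: apply the recommended steps / sampler / clip-skip settings in diffusers.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained("path/to/model", torch_dtype=torch.float16).to("cuda")
# One common diffusers equivalent of "DPM++ SDE Karras"
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++", use_karras_sigmas=True
)
image = pipe("1girl, kawaii, pastel colors", num_inference_steps=20, clip_skip=2).images[0]
image.save("sample.png")
```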
Brief-details: MobileLLM-125M is a compact 125M parameter LLM optimized for on-device use, featuring grouped-query attention and shared embeddings for efficient processing.
Brief-details: Arcee-Agent: A 7.62B parameter LLM optimized for function calling and tool use, based on Qwen2. Supports multiple languages and excels in API integration and automation tasks.
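A rough illustration of how function calling can be exposed through transformers' chat-template tool support; the repo id, tool definition, and the assumption that the model's Qwen2-style template accepts a tools argument are all hypothetical here.

```python
# Hypothetical tool-use sketch: render a prompt that advertises a callable tool.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("arcee-ai/Arcee-Agent")  # assumed repo id

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny"

messages = [{"role": "user", "content": "What's the weather in Taipei right now?"}]
prompt = tokenizer.apply_chat_template(
    messages, tools=[get_weather], add_generation_prompt=True, tokenize=False
)
print(prompt)  # the model's reply would contain a structured tool call to parse and execute
```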
Brief-details: A custom LoRA model for style transfer and character generation, featuring specific style tokens ("mksks style") and character tokens ("sksname gender") with non-commercial licensing.
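As a sketch, the trigger tokens would be used in the prompt after attaching the LoRA to a base pipeline; the base model, LoRA path, and prompt below are placeholders.

```python
# Hypothetical sketch: attach a style LoRA and invoke its trigger tokens in the prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("path/to/base-model", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("path/to/lora")  # placeholder for the LoRA repo or local folder
image = pipe("mksks style, sksname girl, garden background", num_inference_steps=25).images[0]
image.save("lora_sample.png")
```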
Brief-details: Qwen2-Math-72B-Instruct is a specialized 72.7B parameter math-focused LLM optimized for complex mathematical reasoning and arithmetic problem-solving, supporting English interactions.
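A standard chat-template invocation for a math query might look like the following; the generation settings are illustrative, and the 72B weights would need multi-GPU or offloaded loading in practice.

```python
# Sketch: query a math-tuned instruct model via the chat template (settings are assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-Math-72B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Solve 3x + 7 = 22 and show each step."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```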
Brief-details: A 7B parameter instruction-tuned LLM optimized for Traditional Chinese, built on Mistral-7B with expanded vocabulary (62k tokens) and 8k context window. Shows strong bilingual capabilities.
Brief-details: A 34B parameter LLM based on the LLaMa-Yi architecture, achieving a 74.18 average score on key benchmarks. Features the UNA (Uniform Neural Alignment) technique and excels in MMLU tasks.
Brief-details: A Stable Diffusion model fine-tuned for generating low-poly style imagery, featuring unique geometric art style and 3D-like visuals.
Brief-details: MF-Base is an anime-focused text-to-image model trained on high-quality samples from Yande.re and Konachan, featuring advanced caustics and depth handling.
Brief-details: A 1.13B parameter GPTQ-quantized Llama2 model, uncensored and fine-tuned on 40k chat discussions, offering multiple quantization options for efficient deployment.
Brief-details: Baichuan2-13B-Chat-4bits is a large-scale Chinese-English language model with 4-bit quantization, trained on 2.6T tokens with enhanced math and logic capabilities.
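Loading a pre-quantized 4-bit chat checkpoint typically follows the pattern below; the repo id and the chat() helper (provided by Baichuan-style remote code) are assumptions here.

```python
# Hedged sketch: load a 4-bit quantized bilingual chat model and run one turn.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "baichuan-inc/Baichuan2-13B-Chat-4bits"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)

messages = [{"role": "user", "content": "Explain the Pythagorean theorem in one paragraph."}]
response = model.chat(tokenizer, messages)  # chat() is defined by the model's remote code
print(response)
```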
Brief-details: A fine-tuned version of FLAN-T5-base optimized for text summarization, achieving a ROUGE-1 score of 47.2 on the SAMSum dataset with strong dialogue summarization capabilities.
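A quick sketch of using such a checkpoint through the summarization pipeline; the repo id and dialogue are placeholders.

```python
# Sketch: dialogue summarization with a fine-tuned FLAN-T5 checkpoint (placeholder repo id).
from transformers import pipeline

summarizer = pipeline("summarization", model="path/to/flan-t5-base-samsum")
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you then!"
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```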
Brief-details: Japanese text-to-speech model built on the ESPnet framework, trained on the amadeus dataset with the VITS architecture, supporting high-quality voice synthesis.
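ESPnet2 exposes a Text2Speech inference wrapper that such a model can be loaded into; the model tag below is an assumption, and soundfile is used only to write the waveform to disk.

```python
# Hedged sketch: ESPnet2 TTS inference with a VITS checkpoint (model tag is assumed).
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("mio/amadeus")  # assumed Hugging Face model tag
result = tts("こんにちは、今日はいい天気ですね。")
sf.write("output.wav", result["wav"].numpy(), tts.fs)
```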
Brief-details: Llama-3.2V-11B-cot is a visual language model with 10.7B parameters, focused on step-by-step reasoning, achieving 63.5% average performance across visual benchmarks.
Brief-details: A Transformer-based Chinese text-to-speech model trained on Common Voice v7 and CSS10, featuring a single female speaker with high-quality synthesis capabilities.
Brief-details: A powerful 12B parameter conversational AI model built on Mistral-Nemo, featuring enhanced instruction following, coding capabilities, and function calling with a 128K context window.