Brief-details: Baka-Diffusion is a latent diffusion model optimized for anime-style image generation, featuring U-Net block merges, Danbooru tag compatibility, and a recommended CFG range of 3-9.
Brief-details: CoreML-optimized ChilloutMix model for Apple Silicon, specialized in realistic image generation. Features Neural Engine compatibility and VAE integration.
Brief-details: OPT-6.7B-Erebus is an adult-content focused language model based on Meta's OPT architecture, trained on curated NSFW datasets with 6.7B parameters.
Brief-details: A fine-tuned latent text-to-image diffusion model specialized in generating high-quality Artstation-style images with excellent aspect ratio handling.
Brief-details: RoBERTa-based Chinese QA model fine-tuned on the CMRC2018, WebQA, and Laisi datasets. Specializes in extractive question answering with 94% accuracy.
Brief-details: T-lite-instruct-0.1 is an 8B-parameter Russian-language instruction-tuned model with strong MT-Bench and Arena benchmark performance, optimized for further fine-tuning.
Brief-details: Hermes 3 is a 70B parameter LLM built on Llama-3.1, featuring advanced agentic capabilities, improved roleplaying, reasoning, and function calling abilities.
Brief-details: Starling-LM-7B-alpha is a powerful 7B parameter LLM fine-tuned with RLAIF, achieving an impressive MT-Bench score of 8.09 and outperforming most models except GPT-4.
Brief-details: CLIP vision-language model that maps images and text into a shared vector space, reaching 63.3% zero-shot ImageNet accuracy. Supports zero-shot classification and image search.
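The shared vector space that entries like CLIP's rely on reduces zero-shot classification to a nearest-neighbor search by cosine similarity. A minimal sketch with made-up 4-d vectors standing in for real CLIP embeddings (the labels and values are illustrative only):

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_classify(image_emb, label_embs):
    # Score every candidate text label against the image embedding
    # and return the best-matching label with all scores.
    scores = {label: cosine_sim(image_emb, emb) for label, emb in label_embs.items()}
    return max(scores, key=scores.get), scores

# Hypothetical embeddings; a real model would produce these from
# its image and text encoders.
image_emb = np.array([0.9, 0.1, 0.0, 0.1])
label_embs = {
    "a photo of a cat": np.array([0.8, 0.2, 0.1, 0.0]),
    "a photo of a dog": np.array([0.1, 0.9, 0.0, 0.2]),
}
best, scores = zero_shot_classify(image_emb, label_embs)
# best == "a photo of a cat"
```

The same similarity ranking, run against a corpus of pre-computed image embeddings instead of label embeddings, gives the text-to-image search use case mentioned above.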
Brief-details: Florence-2-base-ft is a 0.23B parameter vision foundation model capable of multiple vision-language tasks including captioning, object detection, and OCR.
Brief-details: A 70B parameter LLaMA-3 variant that uses orthogonalization to reduce refusal behaviors, based on weight manipulation research. BF16 format.
Brief-details: A fine-tuned TTS model (647M params) offering high-quality speech generation with emotion control and consistent voices. Built on Parler-TTS Mini v0.1.
Brief-details: GTE-large-zh is a powerful Chinese text embedding model (326M params) offering SOTA performance on CMTEB benchmark, specialized for semantic similarity and information retrieval tasks.
Brief-details: ELYZA-japanese-Llama-2-7b is a Japanese-enhanced LLaMA 2 model with 6.27B parameters, supporting both Japanese and English language tasks with extended pre-training.
Brief-details: Meta's 34B parameter instruction-tuned code generation model, optimized for GGUF format with multiple quantization options for efficient deployment.
Brief-details: A LLaMAfied version of Qwen-7B-Chat optimized for the LLaMA/LLaMA-2 architecture, supporting both English and Chinese, with an MMLU score of 53.48 and a C-Eval score of 54.13.
Brief-details: A 13B parameter LLaMA-based conversational AI model fine-tuned with RLHF, optimized for instruction-following and dialogue tasks with high quality outputs.
Brief-details: XGen-7B-8K-Inst is a 7B parameter LLM by Salesforce trained on 8K sequence lengths, offering enhanced long-context processing capabilities with instruction tuning.
Brief-details: WizardLM-7B-HF is an instruction-following LLM using the Evol-Instruct methodology, built on Llama 7B with float16 precision for efficient GPU inference.
Brief-details: Textual inversion embedding model combining Viking and cyberpunk aesthetics for SD 2.x, trained on 768x768 Midjourney images. Ideal for cool, futuristic Norse-themed art.
Brief-details: A specialized LoRA model trained on 71 AI-generated images to create toy-like 3D renders, built on FLUX.1-dev with distinctive styling and optimized for Euler/DEIS samplers.