Brief-details: Fine-tuned SD1.5 model using Direct Preference Optimization (DPO) to align text-to-image generation with human preferences, trained on the pickapic_v2 dataset.
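The DPO objective behind this fine-tune can be sketched in a few lines. The helper below is an illustrative sketch of the standard DPO loss on one preference pair, not the trainer's actual code; the function name and the beta value are assumptions.

```python
import math

def dpo_loss(logp_w_policy, logp_l_policy, logp_w_ref, logp_l_ref, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    logp_*: summed log-probabilities of the preferred (w) and rejected (l)
    samples under the policy being trained and the frozen reference model.
    """
    # Implicit reward margin: how much more the policy favors the winner
    # over the loser, relative to the reference model.
    margin = (logp_w_policy - logp_w_ref) - (logp_l_policy - logp_l_ref)
    # Loss is -log sigmoid(beta * margin).
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Zero margin gives log(2); a positive margin (policy prefers the winner
# more strongly than the reference) gives a smaller loss.
loss_no_gap = dpo_loss(-10.0, -10.0, -10.0, -10.0)
loss_gap = dpo_loss(-8.0, -12.0, -10.0, -10.0)
```

The loss shrinks as the policy widens the preference gap, which is how human-preference data steers the model without an explicit reward model.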
Brief-details: A fine-tuned 70B parameter LLaMA2 model optimized through two-stage training, featuring ChatML format compatibility and multilingual capabilities.
Brief-details: MPT-30B-Chat GGML is a powerful 30B parameter chat model optimized for CPU/GPU inference, featuring an 8K context length and a range of quantization options.
Brief-details: H2O.ai's 7B parameter language model based on Falcon, fine-tuned on the OpenAssistant dataset. Optimized for text generation with strong instruction-following capabilities.
Brief-details: A 10B parameter Japanese-centric multilingual GPT-NeoX model with instruction fine-tuning, achieving 59.11% accuracy on the 8-task JGLUE benchmark.
Brief-details: WizardLM-13B-V1.0 is a powerful language model built on the Llama architecture, featuring strong performance on MT-Bench (6.35) and HumanEval (24.0 pass@1).
Brief-details: A powerful image tagging model with 96.2M params, trained on the Danbooru dataset. Supports rating, character, and general tags with an F1 score of 0.6854.
Brief-details: A Hololive-focused voice conversion model using so-vits-svc 4.0, created by megaaziib for music generation with Hololive character voices.
Brief-details: A specialized text-to-image diffusion model focused on generating popup book-style illustrations, offering a unique artistic style for creative projects.
Brief-details: A 406M parameter BART-based model for knowledge generation and linking, achieving state-of-the-art results in relation extraction with a 70.74% F1 score.
Brief-details: Dreambooth-trained Stable Diffusion model that generates images in Hergé's distinctive art style (known for Tintin comics). MIT licensed, with 73 likes and 83 downloads.
Brief-details: Qwen2-Audio-7B-GGUF is a state-of-the-art 7.75B parameter audio-language model supporting voice interactions and audio analysis, optimized for local deployment via GGUF quantization.
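GGUF quantization, as used by the entry above, stores weights in small blocks that share a scale factor. The snippet below is a simplified, Q8_0-style sketch of that per-block idea; the function names are illustrative, not llama.cpp's actual API.

```python
def quantize_q8_block(values):
    """Quantize one block of floats to int8 with a shared scale,
    mimicking (in simplified form) the per-block scheme GGUF uses."""
    amax = max(abs(v) for v in values)
    scale = amax / 127.0 if amax > 0 else 1.0
    # Each weight becomes a small integer; only the scale stays in float.
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return scale, q

def dequantize_q8_block(scale, q):
    """Recover approximate floats at inference time."""
    return [scale * x for x in q]

block = [0.5, -1.0, 0.25, 0.0]
scale, q = quantize_q8_block(block)
restored = dequantize_q8_block(scale, q)
```

Storing one float scale plus int8 values per block is what shrinks the file enough for local CPU/GPU deployment, at the cost of a small, bounded reconstruction error per weight.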
Brief-details: Artistic style model based on hitokomoru's distinctive anime artwork, trained using Waifu Diffusion. MIT licensed, with 73 likes.
Brief-details: A versatile 12.2B parameter Mistral-based model optimized for creative writing and roleplay, featuring support for multiple chat templates and BF16 precision.
Brief-details: Experimental English spelling correction model (139M params) using the BART architecture. Fixes typos and punctuation; MIT licensed.
Brief-details: OCRonos-Vintage is a 124M parameter GPT-2-based model specialized in OCR correction for historical texts (pre-1955), trained on cultural heritage archives.
Brief-details: A powerful 1T parameter language model based on Llama 3.1, optimized for creative writing tasks via a self-merge built with mergekit.
Brief-details: Zamba2-2.7B is a hybrid 2.69B parameter model combining state-space and transformer blocks, offering high performance and low latency for text generation.
Brief-details: Korean-specialized 8B parameter LLM based on Llama-3, with a tokenizer vocabulary extended by 17.5K Korean tokens and trained on 100GB of Korean text for improved Korean language capabilities.
Brief-details: MusePose is an advanced image-to-video generation framework for creating human motion videos, featuring pose-driven animation and high-quality dance sequence generation.
Brief-details: Extended-context Llama 3 (8B) model with a 64K context window achieved via the PoSE technique. Built for enhanced long-range text understanding and generation.
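The PoSE technique mentioned above trains on short chunks whose position ids are shifted by random skips, so the model sees positions spanning the full target window without ever processing a full-length sequence. A minimal sketch under that reading; the function name and chunking scheme are illustrative, not the original implementation.

```python
import random

def pose_position_ids(chunk_lens, target_len, seed=0):
    """Assign each short training chunk a contiguous run of position ids,
    inserting random gaps so the ids range over `target_len` positions
    even though the chunks together are much shorter (PoSE-style sketch)."""
    rng = random.Random(seed)
    slack = target_len - sum(chunk_lens)  # total room available for skips
    ids, start = [], 0
    for length in chunk_lens:
        skip = rng.randint(0, slack)  # random jump within remaining slack
        slack -= skip
        start += skip
        # The chunk itself keeps contiguous positions, so local attention
        # patterns are unchanged; only the absolute offsets move.
        ids.append(list(range(start, start + length)))
        start += length
    return ids

# e.g. two 2,048-token chunks standing in for a 65,536-position window
ids = pose_position_ids([2048, 2048], 65536)
```

Because every skip is non-negative and bounded by the remaining slack, the ids stay strictly increasing and below the target length, which is what lets an 8K-trained model generalize to a 64K window.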