Brief-details: Manticore-13B-GGML is a quantized version of the Manticore 13B model, offering various compression levels (2-8 bit) for efficient CPU/GPU inference with llama.cpp compatibility.
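The tradeoff behind those 2-8 bit compression levels can be sketched with simple symmetric per-block quantization. This is a simplified illustration, not the actual GGML format (real GGML/GGUF quant types pack values and often store per-block offsets as well):

```python
import numpy as np

def quantize_block(weights: np.ndarray, bits: int):
    """Symmetric quantization: map floats to signed integers of `bits` width."""
    qmax = 2 ** (bits - 1) - 1                    # e.g. 7 for 4-bit, 127 for 8-bit
    scale = float(np.abs(weights).max()) / qmax
    if scale == 0.0:
        scale = 1.0
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize_block(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
block = rng.normal(size=32).astype(np.float32)    # GGML quantizes weights in small blocks
for bits in (2, 4, 8):
    q, s = quantize_block(block, bits)
    err = np.abs(dequantize_block(q, s) - block).mean()
    print(f"{bits}-bit mean abs reconstruction error: {err:.4f}")
```

Lower bit widths shrink the file (and memory footprint) at the cost of higher reconstruction error, which is exactly the choice the multiple GGML quant levels expose.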
Brief-details: 7B parameter chat model fine-tuned from Pythia on 40M+ instructions, optimized for dialogue tasks. Runs on a 12GB GPU with int8 quantization.
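Back-of-envelope memory math shows why int8 quantization makes the 12GB claim plausible (weights only; activations and KV cache add further overhead):

```python
# Approximate weight memory for a 7B-parameter model at common precisions.
params = 7e9
bytes_per_param = {"fp32": 4, "fp16": 2, "int8": 1}
for dtype, b in bytes_per_param.items():
    gib = params * b / 1024**3
    print(f"{dtype}: {gib:.1f} GiB")
# fp16 (~13 GiB) overflows a 12GB card; int8 (~6.5 GiB) fits with room for activations.
```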
Brief-details: Instruction-tuned Stable Diffusion model specialized in cartoonization, built on SD v1.5 and InstructPix2Pix, offering prompt-based image transformation.
Brief-details: Pygmalion-1.3B is a 1.52B parameter dialogue model fine-tuned from pythia-1.3b-deduped on 56MB of dialogue data, optimized for conversational AI.
Brief-details: Multimodal Speech LLM combining Llama 3.1-8B and Whisper-large-v3-turbo for speech/text processing, supporting 15 languages with 50.3M parameters.
Brief-details: Multi-label topic classifier for tweets, based on TimeLMs, handles 19 categories. Built on 124M tweets dataset with MIT license.
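Multi-label classification means each of the 19 categories gets an independent score, so a tweet can carry several topics at once. A minimal sketch of the decision step (the topic names and logits below are illustrative, not the model's actual labels or outputs):

```python
import numpy as np

def predict_topics(logits, labels, threshold=0.5):
    """Multi-label decision: sigmoid each logit independently, keep scores above threshold."""
    probs = 1 / (1 + np.exp(-np.asarray(logits, dtype=np.float64)))
    return [label for label, p in zip(labels, probs) if p >= threshold]

labels = ["sports", "music", "politics"]          # illustrative subset of the 19 topics
print(predict_topics([2.0, -1.0, 0.3], labels))   # → ['sports', 'politics']
```

Unlike softmax single-label classification, the per-label sigmoids do not compete, so zero, one, or many topics can pass the threshold.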
Brief-details: A specialized image generation model focused on cinematic aesthetics, creating images with bokeh effects, depth of field, and movie-like lighting. Features WTFPL license and 1000 training steps.
Brief-details: Lumimaid-v0.2-8B is an 8B parameter LLM based on Meta-Llama-3.1, trained on diverse datasets, with conversational and NSFW capabilities.
Brief-details: Holodayo XL 2.1 is an advanced anime-style text-to-image model built on Animagine XL V3, optimized for Virtual Youtuber artwork generation with improved anatomy and quality.
Brief-details: Windows-based application for creating dynamic portraits with human and animal modes, featuring automated updates and precise portrait control capabilities.
Brief-details: 11B parameter GGUF model optimized for creative writing and storytelling, featuring an improved vocabulary and diverse genre coverage with a focus on long-form generation.
Brief-details: Qwen2-Audio-7B is an advanced 8.4B parameter audio-language model capable of voice chat and audio analysis, supporting English audio-to-text tasks with BF16 precision.
Brief-details: Qwen1.5-14B-Chat-GGUF is a powerful 14.2B parameter chat model featuring GGUF quantization, 32K context length support, and improved multilingual capabilities.
Brief-details: A compressed 3.82B parameter GGUF version of Phi-3-mini with extended 128k context, optimized for efficient inference and deployment using PrunaAI's compression techniques.
Brief-details: 7B parameter Mistral-based model fine-tuned with DPO, achieving strong performance on multi-turn tasks and benchmarks like MTBench. Built with Distilabel technology.
Brief-details: A personal development blog and model documentation by Sao10K, featuring insights into AI model development, EMT experiences, and life updates.
Brief-details: Lily-Cybersecurity-7B is a specialized 7.24B parameter Mistral-based model fine-tuned on 22,000 cybersecurity scenarios with comprehensive security expertise.
Brief-details: 6.7B parameter LLaMA-based language model focused on open-source transparency, offering full training logs and 360 checkpoints for research and development.
Brief-details: Intel's 7B parameter LLM fine-tuned from Mistral-7B, optimized for general language tasks with strong performance on benchmarks like ARC and TruthfulQA.
Brief-details: A 3.46B parameter music generation model that creates high-quality stereo music from text descriptions.
Brief-details: A powerful Chinese text embedding model trained on 400M text pairs, offering strong performance across classification, clustering and retrieval tasks with 1024-dimensional embeddings.
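Retrieval with such embeddings reduces to cosine similarity between a query vector and the document vectors. A toy sketch with random stand-in 1024-dimensional vectors (any real use would obtain the vectors from the embedding model itself):

```python
import numpy as np

def cosine_top_k(query: np.ndarray, docs: np.ndarray, k: int = 3) -> np.ndarray:
    """Rank document vectors by cosine similarity to the query; return top-k indices."""
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    sims = d @ q                                   # cosine similarity per document
    return np.argsort(-sims)[:k]                   # indices, most similar first

rng = np.random.default_rng(42)
docs = rng.normal(size=(10, 1024))                 # 10 stand-in 1024-dim embeddings
query = docs[7] + 0.1 * rng.normal(size=1024)      # query close to document 7
print(cosine_top_k(query, docs, k=3))              # document 7 ranks first
```

The same ranking step serves classification (nearest labeled example) and clustering (similarity to centroids), which is why a single embedding model covers all three task families.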