Brief-details: A 9B parameter bilingual LLM fine-tuned from Yi-1.5-9B, optimized for roleplay conversations with an 8K context window. Features strong MMLU (66.19%) and CMMLU (69.07%) performance.
Brief-details: HARDblend is a versatile photorealistic AI model specialized in high-quality portrait generation, built as a merge of multiple base models and capable of NSFW output.
Brief-details: WordLlama - A lightweight NLP toolkit (16MB) for text embedding, similarity matching, and ranking. Derived from LLMs and optimized for CPU inference.
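A minimal sketch of how such a toolkit is typically used, following the method names published in the WordLlama README (`load`, `embed`, `similarity`, `rank`); verify against the installed version, as the API may differ.

```python
# Sketch of the WordLlama embedding/similarity/ranking API (names per the
# project README; check your installed version).
from wordllama import WordLlama

wl = WordLlama.load()  # downloads the default lightweight embedding model

# Dense embeddings for downstream use
vectors = wl.embed(["telemetry pipeline", "log aggregation"])

# Pairwise similarity and candidate ranking
score = wl.similarity("I went to the car", "I went to the pawn shop")
ranked = wl.rank("I went to the car",
                 ["I went to the park", "I went to the shop", "I went to the truck"])
print(score, ranked)
```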
Brief-details: Emu2: A 37B parameter multimodal model with strong in-context learning capabilities for text-image tasks, achieving state-of-the-art results on multiple multimodal benchmarks.
Brief-details: A fine-tuned 8B parameter LLaMA 3.1 model optimized for instruction-following and coding, featuring uncensored capabilities and a 128K context window.
Brief-details: Retro-style cartoon model combining anime and Northern European cartoon aesthetics, built on Stable Diffusion XL and optimized for artistic illustrations.
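Since the model is SDXL-based, a generic diffusers sketch like the one below should apply; the checkpoint filename and prompt are placeholders, not details from the model card.

```python
# Hedged sketch: running an SDXL-based community checkpoint with diffusers.
# The .safetensors path is a placeholder for the actual checkpoint file.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "retro_cartoon_xl.safetensors",  # placeholder filename
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "retro cartoon illustration of a fox in a rainy Scandinavian town",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("retro_fox.png")
```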
Brief-details: A 7B parameter uncensored language model available in multiple GGML quantizations (2- to 8-bit), compatible with llama.cpp for mixed CPU+GPU inference.
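A rough sketch of mixed CPU/GPU inference on a GGML file via llama-cpp-python. The filename is a placeholder, and note that recent llama.cpp builds expect GGUF, so reading GGML requires an older release or converting the file first.

```python
# Sketch of CPU+GPU inference on a GGML quantization via llama-cpp-python.
# Assumes a build old enough to read GGML (newer releases expect GGUF).
from llama_cpp import Llama

llm = Llama(
    model_path="model-7b.ggmlv3.q4_K_M.bin",  # placeholder filename
    n_ctx=2048,
    n_gpu_layers=32,   # offload this many layers to the GPU; the rest run on CPU
)

out = llm("### Instruction: Explain GGML quantization briefly.\n### Response:",
          max_tokens=128)
print(out["choices"][0]["text"])
```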
Brief-details: Midjourney V6 port for Stable Diffusion - a LoRA model trained on 100k+ Midjourney V6 images, offering high-quality text-to-image generation in English and Russian.
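A hedged sketch of applying a LoRA like this on top of a Stable Diffusion base model with diffusers. The base checkpoint and LoRA path are placeholders; the actual base model the LoRA targets may differ.

```python
# Sketch: loading a style LoRA into a Stable Diffusion pipeline with diffusers.
# Repo/path names are placeholders; match the base model to the LoRA's training base.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load_lora_weights accepts a Hub repo id or a local .safetensors file
pipe.load_lora_weights("path/to/midjourney-v6-lora")  # placeholder

image = pipe("hyperrealistic portrait, cinematic lighting, midjourney style",
             num_inference_steps=30).images[0]
image.save("mj_style.png")
```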
Brief-details: 8B parameter LLaMA-3-based roleplay model optimized for 1-on-1 interactions. Features GGUF quantizations with importance-matrix (imatrix) calibration. Strong personality handling and NSFW capable.
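A minimal sketch of chatting with a GGUF quantization through llama-cpp-python; the filename and the persona in the system prompt are illustrative placeholders, and the model card's own chat template should take precedence.

```python
# Sketch of 1-on-1 roleplay chat with a GGUF file via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="roleplay-8b.Q4_K_M.gguf",  # placeholder GGUF/imatrix file
    n_ctx=8192,
    n_gpu_layers=-1,  # offload all layers if VRAM allows
)

messages = [
    {"role": "system", "content": "You are Mira, a sarcastic ship's engineer."},
    {"role": "user", "content": "Status report on the reactor, please."},
]
reply = llm.create_chat_completion(messages=messages, max_tokens=200)
print(reply["choices"][0]["message"]["content"])
```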
Brief-details: Qwen-VL-Chat-Int4 is a 4-bit quantized visual language model capable of processing images and text, offering strong performance with a reduced memory footprint and faster inference.
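A sketch following the usage pattern documented for the Qwen-VL-Chat family (custom `from_list_format`/`chat` helpers loaded via `trust_remote_code`); the image path is a placeholder, and the exact helper names should be confirmed against the model card.

```python
# Sketch of image+text chat with the 4-bit Qwen-VL-Chat-Int4 checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Qwen/Qwen-VL-Chat-Int4"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, device_map="auto", trust_remote_code=True
).eval()

query = tokenizer.from_list_format([
    {"image": "demo.jpeg"},          # placeholder image path or URL
    {"text": "What is in this picture?"},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```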
Brief-details: UniNER-7B-all: Advanced named entity recognition model trained on ChatGPT-generated data and 40 supervised datasets, optimized for research applications.
Brief-details: A 72B parameter LLM based on Qwen1.5, fine-tuned on open-source datasets for improved system-prompt compliance and long conversations. Features a 32k context window and the ChatML prompt format.
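For reference, this is the generic ChatML turn format such models expect; the helper below is an illustration I wrote, and any model-specific system-prompt conventions on the card take precedence.

```python
# Generic ChatML prompt construction: each turn is wrapped in
# <|im_start|>{role} ... <|im_end|>, ending with an open assistant turn.
def chatml(messages):
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")  # generation prompt
    return "\n".join(parts)

prompt = chatml([
    {"role": "system", "content": "You are a meticulous research assistant."},
    {"role": "user", "content": "Summarize the last message in two sentences."},
])
print(prompt)
```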
Brief-details: Largest Chinese GPT-2 model (3.5B parameters), trained on the Wudao corpus and specialized in NLG tasks. Apache 2.0 licensed; supports text generation with configurable decoding parameters.
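The "configurable decoding parameters" are the usual sampling controls exposed by transformers; a hedged sketch follows, with the repo id as a placeholder for the actual checkpoint.

```python
# Sketch: sampling-parameter controls with a transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="path/to/chinese-gpt2-3.5b")  # placeholder repo id

out = generator(
    "北京是一座",            # "Beijing is a city that..."
    max_new_tokens=64,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    repetition_penalty=1.1,
)
print(out[0]["generated_text"])
```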
Brief-details: A GPT-J model fine-tuned on the Stanford Alpaca dataset for instruction following; runs on entry-level GPUs in fp16 precision.
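A minimal sketch of loading a GPT-J-class model in fp16 to fit a smaller GPU; the repo id is a placeholder, and the 6B weights still need roughly 12 GB of VRAM in half precision.

```python
# Sketch: fp16 loading and generation for a GPT-J-class instruct model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "path/to/instruct-gpt-j"  # placeholder repo id
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # halves memory vs. fp32
    device_map="auto",
)

inputs = tok("Write a haiku about patience.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(output[0], skip_special_tokens=True))
```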
Brief-details: Large-scale pre-trained conversational AI model for goal-directed dialogs, trained on 551M dialog examples. Supports grounded responses and empathetic chat.
Brief-details: Falcon-7B model fine-tuned on the CodeAlpaca 20K dataset with QLoRA, specialized for code generation and instruction-following tasks.
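An illustrative QLoRA setup (4-bit NF4 base plus LoRA adapters) using transformers/peft/bitsandbytes; the hyperparameters are placeholders I chose for the sketch, not the values used to train this checkpoint.

```python
# Illustrative QLoRA configuration for a Falcon-7B base model.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize the frozen base to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", quantization_config=bnb, device_map="auto"
)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["query_key_value"],     # Falcon's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()          # only the adapter weights train
```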
Brief-details: A 7.24B parameter language model based on the Mistral architecture, designed for text generation, released in BF16 precision under the Apache 2.0 license.
Brief-details: Sensei-7B-V1 is a 7.24B parameter RAG-focused LLM based on Mistral-7B, fine-tuned for accurate search-result processing and summary generation.
Brief-details: StableBeluga2-70B-GPTQ is a quantized version of Stability AI's 70B parameter Llama 2-based chat model, offered in multiple GPTQ variants to suit different hardware requirements.
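A hedged sketch of loading a GPTQ variant with transformers (a GPTQ backend such as optimum + auto-gptq must be installed). The revision value is where alternate quantization branches are usually selected, and the prompt template is an assumption; check the repo for both.

```python
# Sketch: loading and prompting a GPTQ-quantized chat model with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TheBloke/StableBeluga2-70B-GPTQ"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",   # shard across available GPUs
    revision="main",     # other branches typically hold different group-size/act-order variants
)

prompt = "### User:\nGive me three facts about llamas.\n\n### Assistant:\n"
inputs = tok(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=120)
print(tok.decode(output[0], skip_special_tokens=True))
```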
Brief-details: A 34B parameter math-specialized LLM based on Code Llama, achieving state-of-the-art performance on mathematical reasoning tasks.
Brief-details: Russian-language Mistral-7B chatbot with a LoRA adapter, trained on five conversational datasets. Optimized for Russian dialogue and instruction-following tasks.
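A hedged sketch of applying a LoRA adapter like this on top of a Mistral-7B base with peft; the adapter path is a placeholder, and the base checkpoint must match the one the adapter was trained against.

```python
# Sketch: Mistral-7B base + Russian LoRA adapter via peft.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_repo = "mistralai/Mistral-7B-v0.1"
base = AutoModelForCausalLM.from_pretrained(
    base_repo, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "path/to/russian-mistral-lora")  # placeholder adapter

tok = AutoTokenizer.from_pretrained(base_repo)
# "Question: Which sights are worth seeing in Kazan? Answer:"
prompt = "Вопрос: Какие достопримечательности стоит посмотреть в Казани?\nОтвет:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=150)
print(tok.decode(output[0], skip_special_tokens=True))
```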