Brief-details: GGML quantized version of Meta's Llama-2-7B model, offering various quantization levels from 2-bit to 8-bit for efficient CPU/GPU inference
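The memory impact of those quantization levels can be roughly estimated from parameter count times bits per weight. A minimal sketch (illustrative only; it ignores GGML block overhead such as per-group scales, so real files are somewhat larger):

```python
def approx_model_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Rough in-memory footprint: parameters * bits per weight, in GiB.

    Ignores GGML block overhead (per-block scales, metadata).
    """
    return n_params * bits_per_weight / 8 / 1024**3

N_PARAMS = 7e9  # Llama-2-7B

for bits in (2, 4, 8, 16):
    print(f"q{bits}: ~{approx_model_size_gib(N_PARAMS, bits):.1f} GiB")
```

This is why the 2-bit and 4-bit variants fit comfortably in consumer RAM while the full fp16 weights do not.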
Brief-details: A 70B parameter LLaMA-based model with strong performance on reasoning tasks, achieving 75.49% on MMLU and notable results in truthfulness evaluations.
Brief-details: A lightweight question-answering model with 65.2M parameters, achieving 86.9% F1 score on SQuAD. Distilled from BERT, 40% smaller, 60% faster.
Brief-details: A 34B parameter Yi-based model focused on roleplay and NSFW content, fine-tuned on light novels and curated datasets using QLoRA. Part of a planned "Seven Deadly Sins" series.
Brief-details: A 13B parameter GPTQ-quantized language model based on Vicuna, optimized for conversational AI with 4-bit precision and comprehensive RLHF training.
Brief-details: Tencent's Hunyuan3D-1 is a powerful text/image-to-3D generation model using a two-stage approach, featuring fast generation (11-25s) and high-quality outputs.
Brief-details: A versatile text-to-image model merging multiple Stable Diffusion checkpoints, optimized for anime-style art with enhanced background generation and color handling.
Brief-details: LLaVA-13b is a multimodal chatbot combining LLaMA with visual capabilities, trained on 595K image-text pairs and 150K instructions for research purposes.
Brief-details: OpenChat-8192 is an enhanced LLaMA-13B variant with an 8192-token context length, reaching 106.6% of ChatGPT's evaluation score while using minimal training data.
Brief-details: A powerful ControlNet model for SDXL that transforms sketches into high-quality images, supporting multiple line types and widths with MidJourney-comparable results.
Brief-details: An advanced 7B parameter model based on Mistral, optimized for function calling and JSON outputs. Features ChatML format and achieves 90% accuracy in function calling evaluations.
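ChatML wraps each conversation turn in `<|im_start|>` / `<|im_end|>` sentinels. A minimal formatter sketch (illustrative; not this model's official chat template, which may differ in details):

```python
def format_chatml(messages: list[dict]) -> str:
    """Render {'role', 'content'} dicts in ChatML and append the
    assistant header so the model generates the next turn."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are a function-calling assistant. Reply in JSON."},
    {"role": "user", "content": "What's the weather in Paris?"},
])
print(prompt)
```

For function calling, the system turn typically also lists the available tool schemas so the model can emit a matching JSON call.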
Brief-details: A text-to-image diffusion model using Direct Preference Optimization (DPO), fine-tuned from SDXL base 1.0 for enhanced image generation aligned with human preferences.
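The core DPO objective rewards the policy for widening the margin by which it prefers the human-chosen sample over the rejected one, relative to a frozen reference model. A toy sketch of that per-pair loss (the generic log-probability form, not the diffusion-specific variant used to train this model):

```python
import math

def dpo_loss(policy_logp_w: float, policy_logp_l: float,
             ref_logp_w: float, ref_logp_l: float,
             beta: float = 0.1) -> float:
    """DPO loss for one (chosen=w, rejected=l) preference pair:
    -log(sigmoid(beta * implicit reward margin))."""
    margin = (policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l)
    return -math.log(1 / (1 + math.exp(-beta * margin)))

# When policy and reference agree, margin = 0 and loss = -log(0.5)
print(dpo_loss(-10.0, -12.0, -10.0, -12.0))
```

As the policy assigns more probability to the chosen sample than the reference does, the margin grows and the loss falls, which is the alignment pressure DPO applies without a separate reward model.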
Brief-details: A 13B parameter LLM merging Platypus2 and OpenOrca models, achieving 59.5% on MMLU and excelling in STEM/logic tasks. Strong performance in reasoning and knowledge-based evaluations.
Brief-details: DeciLM-7B: Top-performing 7B parameter LLM with 8K token context, variable GQA attention, and 4.4x faster throughput than Mistral-7B
Brief-details: A specialized LoRA model for SDXL that transforms prompts into IKEA-style instruction manual illustrations, ideal for step-by-step visual guides
Brief-details: WizardLM-13B-V1.2 is an advanced LLM based on Llama-2 13B, achieving strong performance on MT-Bench (7.06) and AlpacaEval (89.17%), specialized in following complex instructions.
Brief-details: DeepSeek-VL-7B is an open-source vision-language model with 7.34B parameters, capable of processing complex visual inputs including diagrams, web pages, and scientific content.
Brief-details: A powerful 118B parameter language model created by merging two fine-tuned Llama-2 70B models (Xwin and Euryale), optimized for conversational AI tasks with multiple quantization options.
Brief-details: A highly realistic text-to-speech model featuring multi-voice capabilities, advanced prosody, and unique voice customization through reference clips
Brief-details: High-performance GGUF quantized version of OpenHermes 2.5 Mistral-7B, optimized for local deployment with strong code and general task capabilities, featuring ChatML format support
Brief-details: Complex-Lineart is a specialized text-to-image model trained on 100 images at 768x768 resolution, designed for creating detailed line art with complex cyberpunk and mechanical themes.