Brief Details: A powerful 35B parameter multilingual LLM based on Cohere's architecture, featuring 128K context, trained on 30M+ dialogue entries with enhanced fact-based capabilities.
Brief Details: Japanese-optimized 7B parameter LLM based on Mistral, achieving top Japanese language performance with 3.8 ELYZA-TASK100 score. Specialized for AI assistant and VTuber applications.
Brief Details: A powerful 70B parameter LLM fine-tuned using Tenyx's DPO approach, achieving state-of-the-art 8.15 MT-Bench score among open-source models.
Brief-details: Yi-6B-Chat is a 6B parameter bilingual LLM optimized for conversation, featuring advanced language understanding and efficient 4-bit/8-bit quantized versions.
Brief Details: DeBERTa-v3 model fine-tuned for prompt injection detection with 184M parameters, achieving 99.99% accuracy. Built for security.
Brief Details: DynamiCrafter_pruned is an advanced video diffusion model that generates 2-second looping videos from still images and text prompts at 320x512 resolution.
Brief-details: German speech recognition model based on Whisper Large v3 with 1.54B parameters, achieving 3% WER and 0.81% CER on Common Voice German dataset.
BRIEF DETAILS: Storyboard-sketch is a LORA model for SDXL that transforms prompts into grayscale storyboard sketches with adjustable detail levels through strength settings.
Brief Details: Inkbot-13B is a conversational AI model focused on structured prompts, excelling in RAG queries with 8k context window and specialized task handling.
Brief Details: A 6.74B parameter Llama-2 based model optimized for fiction writing and conversations, featuring instruction-tuning and multi-role prompting capabilities.
Brief Details: NuNER-v0.1 is a RoBERTa-based token classification model fine-tuned for entity recognition, offering improved embeddings for NER tasks in English.
BRIEF-DETAILS: Samantha-1.11-70b is a 70B parameter LLM trained on Llama-2, focused on being an AI companion with emphasis on philosophy, psychology, and relationships.
Brief-details: Japanese-centric multilingual GPT-NeoX model with 10B parameters, trained on 600B tokens. Excels in Japanese language tasks with strong JGLUE benchmark performance.
Brief-details: Llama-2-13B-fp16 is Meta's 13B parameter LLM optimized for fp16 precision, featuring strong performance across reasoning, knowledge, and comprehension tasks with 2T training tokens
Brief-details: Collection of ControlNet v1.1 models optimized for Apple Core ML, enabling advanced image generation control on Apple devices with Stable Diffusion v1.5 compatibility.
BRIEF DETAILS: SuperHOT prototype 2: NSFW-focused LoRA with 8K context window, trained on 13B base model. Features custom positional encoding and no RLHF.
Brief-details: Guanaco-33B GGML is a high-performance language model optimized for CPU/GPU inference, offering various quantization options from 2-bit to 8-bit precision
Brief-details: CodeT5+ 16B is a powerful encoder-decoder LLM specialized for code tasks, supporting multiple languages with state-of-the-art performance in code generation and understanding.
Brief-details: FuzzyHazel is a specialized text-to-image model built on Stable Diffusion, known for its unique fuzzy artistic style and efficient image generation with 28-step sampling capability.
Brief-details: OpenAssistant's 30B parameter LLaMA-based model, fine-tuned using supervised learning. Requires XOR decoding process with original LLaMA weights for implementation.
Brief-details: Promptist is a Microsoft-developed reinforcement learning model that optimizes text prompts for Stable Diffusion v1.4, enhancing text-to-image generation through automated prompt refinement.