# Llama-3.1-8B-ArliAI-RPMax-v1.2
| Property | Value |
|---|---|
| Parameter Count | 8.03B |
| Context Length | 128K |
| License | llama3.1 |
| Tensor Type | BF16 |
| Training Duration | 1 day on 2x 3090 Ti |
## What is Llama-3.1-8B-ArliAI-RPMax-v1.2?
Llama-3.1-8B-ArliAI-RPMax-v1.2 is a specialized variant of Meta's LLaMA-3.1 model, specifically optimized for creative writing and roleplaying applications. This model represents part of the RPMax series, which focuses on providing diverse, non-repetitive outputs through carefully curated training data.
## Implementation Details
The model was trained with a LoRA approach using a rank of 64 and an alpha of 128, which makes roughly 2% of the weights trainable. Training ran for a single epoch to minimize repetition sickness, with a low learning rate of 0.00001 and gradient accumulation steps of 32.
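The hyperparameters above can be sketched as a peft-style configuration. Note that the `target_modules` list is an assumption (a common choice for Llama-family fine-tunes); the model card only states the rank, alpha, learning rate, epoch count, and gradient accumulation.

```python
# Hypothetical sketch of the stated LoRA setup as plain dicts.
lora_config = {
    "r": 64,            # LoRA rank, per the model card
    "lora_alpha": 128,  # LoRA alpha; effective scaling = alpha / r = 2.0
    # Assumed attention projections; not specified in the model card.
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
}

training_args = {
    "num_train_epochs": 1,             # single epoch to curb repetition
    "learning_rate": 1e-5,             # low learning rate, per the card
    "gradient_accumulation_steps": 32,
}

# LoRA updates are scaled by alpha / r before being added to the base weights.
scaling = lora_config["lora_alpha"] / lora_config["r"]
```

With alpha set to exactly twice the rank, the adapter's effective scaling factor is 2.0, a common convention in LoRA fine-tunes.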
- Training sequence length of 8192 tokens
- Available in multiple formats including FP16 and GGUF
- Implements Llama 3 Instruct Format for prompting
- Trained on deduplicated creative writing datasets
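Since the model expects the Llama 3 Instruct format, a single-turn prompt can be assembled as follows. The persona and message text are purely illustrative:

```python
def format_llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt in the Llama 3 Instruct format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt(
    "You are Mira, a sardonic ship navigator.",  # hypothetical roleplay persona
    "Where are we headed next?",
)
```

For roleplay use, the system message typically carries the character card, while each user turn continues the scene.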
## Core Capabilities
- High-quality creative writing and roleplay responses
- Non-repetitive character generation
- Extended context handling up to 128K tokens
- Flexible personality adaptation
- Improved scenario understanding
## Frequently Asked Questions
**Q: What makes this model unique?**
The model's uniqueness lies in its training approach, which emphasizes dataset deduplication and variety in character interactions. Users have reported that it produces distinctly different outputs compared to other RP models, avoiding common issues of personality repetition.
**Q: What are the recommended use cases?**
This model is specifically designed for creative writing, roleplay scenarios, and character-based interactions. It excels in generating diverse and contextually appropriate responses while maintaining consistent character personalities.