Llama-3.1-8B-ArliAI-RPMax-v1.2
| Property | Value |
|---|---|
| Parameter Count | 8.03B |
| Context Length | 128K tokens |
| License | Llama 3.1 |
| Training Duration | 1 day on 2x3090Ti |
| Format | BF16 |
What is Llama-3.1-8B-ArliAI-RPMax-v1.2?
Llama-3.1-8B-ArliAI-RPMax-v1.2 is a variant of Meta's Llama 3.1 8B fine-tuned for creative writing and roleplaying. It is version 1.2 of the RPMax series, featuring improved dataset deduplication and stricter filtering of irrelevant content.
Implementation Details
The model employs a LoRA training approach with rank 64 and alpha 128, which leaves approximately 2% of the weights trainable. Training was conducted over a single epoch to minimize repetition sickness, with a low gradient accumulation of 32, which the authors report yields better learning. Key hyperparameters are listed below, followed by a configuration sketch.
- Sequence Length: 8192 tokens
- Learning Rate: 0.00001
- Available in multiple formats including FP16 and GGUF
- Supports the Llama 3 Instruct format for prompting (see the inference sketch below)
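
The hyperparameters above map onto a standard PEFT setup. Below is a minimal configuration sketch, not the authors' actual training script: the base model id, target modules, dropout, batch size, and output path are assumptions the card does not specify; only the numeric hyperparameters come from the card.

```python
# Minimal sketch of the described LoRA setup with peft + transformers.
# Only rank 64, alpha 128, lr 1e-5, grad accum 32, 1 epoch, and BF16 come
# from the card; everything else here is an assumption.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",  # assumed base model
    torch_dtype=torch.bfloat16,          # card: BF16
)

lora_config = LoraConfig(
    r=64,              # 64-rank, per the card
    lora_alpha=128,    # 128-alpha, per the card
    lora_dropout=0.0,  # assumption: not stated in the card
    # Assumption: targeting all linear projections; with rank 64 this comes
    # out to roughly 2% trainable weights on an 8B model, matching the card.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

training_args = TrainingArguments(
    output_dir="rpmax-lora",         # hypothetical path
    num_train_epochs=1,              # single epoch against repetition sickness
    learning_rate=1e-5,              # card: 0.00001
    gradient_accumulation_steps=32,  # card: gradient accumulation of 32
    per_device_train_batch_size=1,   # assumption
    bf16=True,
)
# The 8192-token sequence length would be enforced by the data pipeline
# (e.g., a trainer's max_seq_length); omitted here for brevity.
```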
Core Capabilities
- High creativity and non-repetitive outputs
- Diverse character and situation handling
- Enhanced contextual understanding for roleplay scenarios
- 128K context window for extended conversations
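
Because the model uses the Llama 3 Instruct format, the tokenizer's chat template builds prompts in that format automatically. A minimal inference sketch follows, assuming the Hugging Face repo id matches the model name above; the persona and sampling settings are purely illustrative.

```python
# Minimal inference sketch; apply_chat_template emits the Llama 3 Instruct
# format (<|start_header_id|> ... <|eot_id|>) for us.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Mira, a sardonic ship engineer."},  # illustrative persona
    {"role": "user", "content": "The reactor is making that noise again."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.9)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```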
Frequently Asked Questions
Q: What makes this model unique?
A: This model stands out due to its specialized training on carefully curated creative writing and RP datasets, with particular attention to avoiding character and situation repetition. Users report that it exhibits a distinct style compared to other RP models.
Q: What are the recommended use cases?
A: The model is optimized for creative writing, roleplaying, and character-driven interactions, and excels in scenarios that call for diverse character portrayals and creative storytelling. A minimal GGUF-based usage sketch follows.
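
For the GGUF builds mentioned earlier, a roleplay session could be driven locally through llama-cpp-python. The sketch below is one way to do it, with a hypothetical quantization filename that should be replaced by whichever quant is actually downloaded.

```python
# Sketch of running a GGUF quantization with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.1-8B-ArliAI-RPMax-v1.2-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=16384,      # raise toward the 128K limit as memory allows
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a dramatic medieval bard."},  # illustrative
        {"role": "user", "content": "Sing of the dragon that took the north tower."},
    ],
    max_tokens=256,
    temperature=0.9,
)
print(response["choices"][0]["message"]["content"])
```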