Llama-3.1-8B-ArliAI-RPMax-v1.2

Maintained By
ArliAI

  • Parameter Count: 8.03B
  • Context Length: 128K tokens
  • License: Llama 3.1
  • Training Duration: 1 day on 2x RTX 3090 Ti
  • Format: BF16

What is Llama-3.1-8B-ArliAI-RPMax-v1.2?

Llama-3.1-8B-ArliAI-RPMax-v1.2 is a variant of Meta's Llama 3.1 8B model, fine-tuned for creative writing and roleplaying applications. This model is version 1.2 of the RPMax series, featuring improved dataset deduplication and filtering of irrelevant content.

Implementation Details

The model employs a LoRA training approach with rank 64 and alpha 128, which makes roughly 2% of the weights trainable. Training ran for a single epoch to minimize repetition sickness, using a low gradient-accumulation value of 32.
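The "approximately 2% trainable weights" figure can be sanity-checked with a back-of-the-envelope count. The dimensions below are the published Llama-3.1-8B architecture; the list of targeted projection layers is a common LoRA choice and an assumption here, not something the card states.

```python
# Back-of-the-envelope check of the "~2% trainable weights" claim.
# Architecture dimensions are standard Llama-3.1-8B; the target-module
# list is an assumed (typical) LoRA configuration.

HIDDEN = 4096         # hidden size
INTERMEDIATE = 14336  # MLP intermediate size
KV_DIM = 1024         # k/v projection output (grouped-query attention)
LAYERS = 32
RANK = 64             # LoRA rank stated on the card

# (in_features, out_features) for each targeted linear layer
targets = {
    "q_proj": (HIDDEN, HIDDEN),
    "k_proj": (HIDDEN, KV_DIM),
    "v_proj": (HIDDEN, KV_DIM),
    "o_proj": (HIDDEN, HIDDEN),
    "gate_proj": (HIDDEN, INTERMEDIATE),
    "up_proj": (HIDDEN, INTERMEDIATE),
    "down_proj": (INTERMEDIATE, HIDDEN),
}

# LoRA adds two low-rank matrices per layer: A (r x d_in) and B (d_out x r),
# i.e. r * (d_in + d_out) extra parameters per targeted linear layer.
lora_params = LAYERS * sum(RANK * (d_in + d_out) for d_in, d_out in targets.values())
total_params = 8.03e9

print(f"LoRA params: {lora_params / 1e6:.1f}M")            # ~167.8M
print(f"Fraction:    {lora_params / total_params:.2%}")    # ~2.09%
```

The result, about 168M adapter parameters out of 8.03B, lands right at the ~2% the card reports.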

  • Sequence Length: 8192 tokens
  • Learning Rate: 0.00001
  • Available in multiple formats including FP16 and GGUF
  • Supports the LLaMA 3 Instruct Format for prompting
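The Llama 3 Instruct format mentioned above can be sketched as a small helper. The special-token strings are the standard Llama 3 chat tokens; in practice you would normally let the model's tokenizer apply its built-in chat template rather than building the string by hand.

```python
# Minimal sketch of the Llama 3 Instruct prompt layout.
# The special tokens are the standard Llama 3 ones; the roleplay
# system prompt below is only an illustrative example.

def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3 Instruct prompt."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt(
    "You are a creative roleplay partner. Stay in character.",
    "Describe the tavern as my character walks in.",
)
```

The trailing assistant header leaves the prompt open for the model to generate its reply.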

Core Capabilities

  • High creativity and non-repetitive outputs
  • Diverse character and situation handling
  • Enhanced contextual understanding for roleplay scenarios
  • 128K context window for extended conversations

Frequently Asked Questions

Q: What makes this model unique?

This model stands out due to its specialized training on carefully curated creative writing and RP datasets, with particular attention to avoiding character and situation repetition. Users report that it exhibits a distinct style compared to other RP models.

Q: What are the recommended use cases?

The model is optimized for creative writing, roleplaying, and character-based interactions. It excels in scenarios requiring diverse character portrayals and creative storytelling.
