lzlv_70b_fp16_hf

by lizpreciatior

A 70B-parameter LLM merge combining the Nous-Hermes, Xwin-LM, and Mythospice models, optimized for creative roleplay while maintaining instruction-following capabilities.

Parameter Count: 69B
License: CC-BY-NC-2.0
Model Type: Text Generation
Architecture: LLaMA2-based merge
Tensor Type: F32 / BF16

What is lzlv_70b_fp16_hf?

lzlv_70b_fp16_hf is a multi-model merge combining three 70B-parameter language models: Nous-Hermes-Llama2-70b, Xwin-LM-70B-V0.1, and Doctor-Shotgun/Mythospice-70b. The merge aims to balance creative capabilities with robust instruction following, making it particularly suitable for roleplay and creative tasks.

Implementation Details

The model employs a layered merging strategy using SLERP with gradient weights. Two intermediate components are built first: Mythospice merged with Xwin using the gradient [0.25, 0.3, 0.5], and Xwin merged with Hermes using the gradient [0.4, 0.3, 0.25]. The two intermediates are then combined with a uniform SLERP weight of 0.5.
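
To make the recipe concrete, here is a minimal sketch of SLERP merging with a layer-wise gradient. It assumes, as in common merge tooling, that a short gradient list such as [0.25, 0.3, 0.5] is stretched across the layer stack so each layer gets its own interpolation weight; `slerp` and `layer_ts` are illustrative names, not code from the model's repository.

```python
import numpy as np
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Flattens both tensors, interpolates along the arc between their
    normalized directions, and restores the original shape/dtype.
    """
    shape, dtype = v0.shape, v0.dtype
    v0f = v0.flatten().float()
    v1f = v1.flatten().float()
    dot = torch.dot(v0f / (v0f.norm() + eps), v1f / (v1f.norm() + eps)).clamp(-1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < 1e-4:
        # Nearly colinear weights: plain linear interpolation is numerically safer.
        merged = (1.0 - t) * v0f + t * v1f
    else:
        sin_theta = torch.sin(theta)
        merged = (torch.sin((1.0 - t) * theta) / sin_theta) * v0f \
               + (torch.sin(t * theta) / sin_theta) * v1f
    return merged.reshape(shape).to(dtype)

def layer_ts(gradient: list[float], n_layers: int) -> list[float]:
    """Stretch a short gradient list into one interpolation weight per layer."""
    xs = np.linspace(0.0, len(gradient) - 1, n_layers)
    return np.interp(xs, np.arange(len(gradient)), gradient).tolist()

# Hypothetical use for the first intermediate (Mythospice toward Xwin),
# assuming an 80-layer 70B stack and per-layer weight tensors:
#   for t, (w_myth, w_xwin) in zip(layer_ts([0.25, 0.3, 0.5], 80), layer_pairs):
#       merged_layer = slerp(t, w_myth, w_xwin)
```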

  • Uses the Vicuna prompt format (see the example after this list)
  • Supports both F32 and BF16 tensor types
  • Quantized versions are available through TheBloke's GGUF conversions
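
As a hedged illustration of the prompt format and tensor types above, the sketch below builds a Vicuna-style prompt and runs the full-precision checkpoint through Hugging Face transformers in BF16. The repo id, system preamble wording, and generation settings are assumptions for the example, not values taken from this page; quantized GGUF files would instead be loaded through llama.cpp or a compatible runtime.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "lizpreciatior/lzlv_70b_fp16_hf"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # BF16 is one of the supported tensor types
    device_map="auto",           # shard the 70B weights across available GPUs
)

# Vicuna prompt format: system preamble followed by USER/ASSISTANT turns.
system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers."
)
user_message = "Write the opening scene of a seafaring fantasy story."
prompt = f"{system} USER: {user_message} ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```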

Core Capabilities

  • Enhanced creative text generation
  • Strong instruction-following abilities inherited from Xwin-LM
  • Balanced performance in complex scenarios
  • Improved narrative and roleplay capabilities

Frequently Asked Questions

Q: What makes this model unique?

The model's unique strength lies in its careful balance of creative capabilities and instruction-following abilities, achieved through its layered SLERP merge of three distinct 70B models.

Q: What are the recommended use cases?

The model excels in creative writing, roleplaying scenarios, and complex narrative tasks while maintaining the ability to follow detailed instructions effectively.
