Lamarck-14B-v0.7-Fusion

Maintained By
sometimesanotion

Property           Value
-----------------  ------------------
Model Size         14B parameters
Author             sometimesanotion
Model URL          Hugging Face
Architecture Type  Fusion Merge Model

What is Lamarck-14B-v0.7-Fusion?

Lamarck-14B-v0.7-Fusion is an experimental language model built around a multi-stage merge that uses the arcee_fusion method. It combines several iterations of the Lamarck and Qwenvergence model lines in a four-step merge process, three steps of which are fusion merges.
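
To make the staged structure concrete, here is a toy Python sketch of how chained merges compose. Plain linear interpolation stands in for the real merge operators (arcee_fusion, SLERP), and the tensors, model variables, and blend ratios are invented for illustration; this is not the actual recipe.

```python
import torch

def blend(a: dict, b: dict, t: float) -> dict:
    """Toy per-parameter blend; stands in for a real merge operator."""
    return {name: (1.0 - t) * a[name] + t * b[name] for name in a}

# Invented stand-ins for parent checkpoints (a real merge operates on
# full 14B-parameter state dicts loaded from disk).
lamarck = {"layers.0.weight": torch.randn(8, 8)}
qwenvergence = {"layers.0.weight": torch.randn(8, 8)}
lamarckvergence = {"layers.0.weight": torch.randn(8, 8)}

# Chained stages: each stage's output becomes an input to the next.
stage_1 = blend(lamarck, qwenvergence, t=0.5)
stage_2 = blend(stage_1, lamarckvergence, t=0.3)
# ...the real pipeline runs four such steps, three of them fusion merges.
```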

Implementation Details

The merge draws on several models, including Lamarck-14B-v0.7, Lamarckvergence, and Qwenvergence-14B-v12-Prose-DS. It is performed in bfloat16 precision with int8_mask and normalization enabled.

  • Multi-stage fusion process with SLERP merging (see the sketch after this list)
  • Integration with Chocolatine-2-14B-Instruct-v2.0.3
  • Enhanced emphasis on Qwenvergence-14B-v12-Prose-DS in later layers
  • Specialized layer configurations for attention and MLP components
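
The SLERP step interpolates along the unit sphere rather than along a straight line, which tends to preserve weight magnitudes when the two parents diverge. Below is a minimal PyTorch sketch of SLERP on flattened weight tensors; the per-layer t schedules for attention and MLP components mentioned above are not shown, and the linear fallback for near-parallel tensors mirrors common practice rather than this model's exact configuration.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor,
          eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    a, b = v0.flatten().float(), v1.flatten().float()
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    # Angle between the two (normalized) weight vectors.
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel: fall back to plain linear interpolation.
        return (1.0 - t) * v0 + t * v1
    so = torch.sin(omega)
    out = (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(v0.shape).to(v0.dtype)

# Example: blend two same-shaped weight tensors halfway along the arc.
merged = slerp(0.5, torch.randn(8, 8), torch.randn(8, 8))
```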

Core Capabilities

  • Strong GPQA (graduate-level, Google-proof question answering) performance
  • Enhanced reasoning capabilities
  • Superior prose generation
  • Optimized for free-form creative tasks
  • Moderate IFEval (instruction-following) performance

Frequently Asked Questions

Q: What makes this model unique?

The model's distinctive feature is its innovative use of the arcee_fusion merge method in a multi-stage process, combining the strengths of multiple parent models while maintaining high-quality prose generation capabilities.

Q: What are the recommended use cases?

This model is particularly well-suited for free-form creative tasks, general-purpose question answering, and applications requiring strong reasoning capabilities. It excels in scenarios where high-quality prose generation is essential.
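
As a starting point, the model should load like any other causal language model on the Hugging Face Hub. A hedged sketch, assuming the repo id is sometimesanotion/Lamarck-14B-v0.7-Fusion and you have enough memory for 14B parameters in bfloat16 (roughly 28 GB):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "sometimesanotion/Lamarck-14B-v0.7-Fusion"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge's bfloat16 precision
    device_map="auto",
)

prompt = "Write a short, vivid paragraph about a lighthouse keeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```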
