llama2-7b_KL_1e-05_forget01

locuslab

A LLaMA 2 7B variant from the TOFU machine-unlearning benchmark, trained with the KL-minimization unlearning method at a learning rate of 1e-05 on the forget01 split (1% of the TOFU data designated to be forgotten)

Property      Value
Base Model    LLaMA 2 7B
Developer     LocusLab
Model URL     huggingface.co/locuslab/llama2-7b_KL_1e-05_forget01

What is llama2-7b_KL_1e-05_forget01?

This is a variant of LLaMA 2 7B released as part of the TOFU machine-unlearning benchmark. Starting from a model fine-tuned on the full TOFU dataset of question-answer pairs about fictitious authors, it has been unlearned with the KL-minimization method. The name encodes the unlearning method (KL), the learning rate (1e-05), and the forget split (forget01, i.e. 1% of the fine-tuning data targeted for removal). The goal is to erase the forget-split knowledge while preserving the model's behavior on the retained data.

Implementation Details

The model builds upon the LLaMA 2 7B architecture; the unlearning procedure changes only the training objective, not the network. The KL method augments the loss with a KL-divergence term between the current model's output distribution and that of the original fine-tuned model on retained data, which anchors the edited model to the reference, while updates on the forget split push down the likelihood of the targeted facts.

  • Fine-tuned, then unlearned, version of LLaMA 2 7B
  • Unlearning method: KL-divergence minimization against the original fine-tuned model
  • Learning rate: 1e-05
  • Forget split: forget01 (1% of the TOFU data)
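The objective described above can be sketched numerically. This is a toy illustration, not the TOFU training code: the next-token distributions, the KL weight, and the exact loss combination are all assumed for the example.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) between two discrete next-token distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions over a 4-token vocabulary.
ref = [0.70, 0.15, 0.10, 0.05]   # original fine-tuned (reference) model
cur = [0.40, 0.30, 0.20, 0.10]   # model being unlearned

# Forget term: minimizing +log p on the forget example drives the
# probability of the targeted fact down (gradient ascent on its NLL).
p_forget = cur[0]                # probability the current model gives the fact
forget_term = math.log(p_forget)

# KL anchor on retained data keeps the edited model near the reference.
kl_weight = 1.0                  # assumed hyperparameter, not from the model name
loss = forget_term + kl_weight * kl_divergence(cur, ref)
print(round(loss, 4))            # combined unlearning loss for this toy example
```

Minimizing this combined loss trades off the two terms: the forget term rewards lowering `p_forget`, while the KL term penalizes drifting away from the reference distribution on everything else.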

Core Capabilities

  • Selective removal of forget-split knowledge while preserving overall model behavior
  • Controlled forgetting of targeted information
  • Retains base LLaMA 2 capabilities on general tasks
  • Serves as a reference point for comparing unlearning methods and hyperparameters

Frequently Asked Questions

Q: What makes this model unique?

Its value lies in being a reproducible unlearning checkpoint: it applies the KL-minimization method at a documented learning rate to a standard forget split, so forgetting quality and retained performance can be compared directly against other methods and settings in the TOFU benchmark.

Q: What are the recommended use cases?

The model is particularly suited to research on machine unlearning: benchmarking how well specific information can be removed, studying the trade-off between forgetting and retained capability, and comparing unlearning methods under controlled conditions. It is a research artifact rather than a production assistant.
