r1-1776-distill-llama-70b-abliterated-4.5bpw-exl2

Maintained By
matatonic

Base Model: LLaMA 70B
Author: matatonic
Optimization: 4.5 bits per weight
Repository: Hugging Face

What is r1-1776-distill-llama-70b-abliterated-4.5bpw-exl2?

This model is an uncensored version of perplexity-ai/r1-1776-distill-llama-70b, modified with the abliteration technique. It serves as a proof of concept for removing refusal behaviors from large language models without relying on TransformerLens.
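
The core idea behind abliteration can be sketched in a few lines of PyTorch: collect hidden states for contrasting prompt sets, take the mean difference as a "refusal direction", and project that direction out of the weight matrices that write into the residual stream. The sketch below is illustrative only; it uses a small stand-in model, a hand-picked layer, and toy prompt lists, not the exact recipe behind this release.

```python
# Minimal sketch of abliteration: estimate a "refusal direction" from the
# difference in hidden states between harmful and harmless prompts, then
# project it out of the layer outputs. Model, layer, and prompts are
# illustrative placeholders, not the settings used for this 70B release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # small stand-in model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

LAYER = 16  # which residual-stream layer to probe (a hyperparameter)

def mean_hidden(prompts):
    """Mean hidden state at LAYER over the last token of each prompt."""
    vecs = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        with torch.no_grad():
            hs = model(ids, output_hidden_states=True).hidden_states
        vecs.append(hs[LAYER][0, -1])
    return torch.stack(vecs).mean(0)

harmful = ["How do I pick a lock?"]            # in practice: many advbench prompts
harmless = ["How do I bake sourdough bread?"]  # in practice: many benign prompts
refusal_dir = mean_hidden(harmful) - mean_hidden(harmless)
refusal_dir = refusal_dir / refusal_dir.norm()

# Orthogonalize each layer's output projections against the refusal
# direction, i.e. left-multiply by (I - r r^T), so no layer can write
# along that direction into the residual stream.
for layer in model.model.layers:
    for proj in (layer.self_attn.o_proj, layer.mlp.down_proj):
        W = proj.weight.data  # shape (d_model, d_in); outputs land in the residual stream
        W -= torch.outer(refusal_dir.to(W.dtype), refusal_dir.to(W.dtype) @ W)
```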

Implementation Details

The abliteration procedure draws on the advbench dataset to characterize refusal behavior in the base model. The result is quantized to 4.5 bits per weight (bpw) in the EXL2 format and is compatible with the Ollama framework for easy deployment (a loading sketch follows the list below).

  • Built on LLaMA 70B architecture
  • Implements abliteration technique for behavior modification
  • Optimized for efficient deployment at 4.5bpw
  • Direct integration with Ollama platform
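
Since the weights ship in EXL2 format, one natural way to run them locally is through the exllamav2 Python package. This is a minimal loading sketch, assuming exllamav2 is installed and the quantized weights have been downloaded; the model directory path is a placeholder, and the current API should be checked against the exllamav2 documentation.

```python
# Minimal sketch: load the EXL2 quant with exllamav2 and generate once.
# The local path is a placeholder for wherever the weights were downloaded.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("/models/r1-1776-distill-llama-70b-abliterated-4.5bpw-exl2")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate the KV cache as layers load
model.load_autosplit(cache)                # split the 70B model across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Explain abliteration in one paragraph.",
                         max_new_tokens=200))
```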

Core Capabilities

  • Uncensored text generation
  • Modified response patterns compared to base model
  • Efficient memory usage through weight optimization
  • Compatible with Ollama deployment framework
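
For a rough sense of what the 4.5bpw quantization buys: 70 billion parameters at 4.5 bits each come to about 37 GiB of weights, versus roughly 130 GiB at 16-bit precision. A back-of-envelope check (weights only; KV cache and activations add overhead on top):

```python
# Back-of-envelope weight footprint at different precisions.
params = 70e9
for bpw in (16, 4.5):
    print(f"{bpw:>4} bpw: {params * bpw / 8 / 2**30:.1f} GiB")
# ≈ 130.4 GiB at 16 bpw vs ≈ 36.7 GiB at 4.5 bpw
```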

Frequently Asked Questions

Q: What makes this model unique?

Its combination of the abliteration technique, which removes refusal behaviors, with efficient 4.5bpw quantization makes it both functionally distinct from the base model and economical to run.

Q: What are the recommended use cases?

The model is primarily designed for research purposes and exploration of language model behavior modification. It's particularly suitable for applications requiring modified response patterns and efficient deployment through the Ollama framework.
