# NeuralDaredevil-8B-abliterated-GGUF
| Property | Value |
|---|---|
| Model Size | 8B parameters |
| Model Type | DPO fine-tuned LLM |
| Quantization | GGUF format via llama.cpp |
| Source Model | mlabonne/NeuralDaredevil-8B-abliterated |
| Model URL | QuantFactory/NeuralDaredevil-8B-abliterated-GGUF |
## What is NeuralDaredevil-8B-abliterated-GGUF?
NeuralDaredevil-8B-abliterated-GGUF is a quantized version of a high-performing uncensored language model that achieved top scores on the Open LLM Leaderboard among uncensored 8B models. It was built through DPO fine-tuning of the Daredevil-8B-abliterated base model, trained on the orpo-dpo-mix-40k dataset for one epoch. The model maintains the source model's performance while offering improved efficiency through quantization.
## Implementation Details
The model implements Direct Preference Optimization (DPO) fine-tuning techniques to recover performance losses from the abliteration process. The GGUF quantization allows for efficient deployment while maintaining model capabilities.
- Achieves a 55.87 average score across the Nous benchmark suites
- Optimized for LM Studio deployment using the "Llama 3" preset
- Outperforms the base instruct model in comprehensive testing
## Core Capabilities
- Excels in unaligned tasks and role-playing scenarios
- Strong performance in MMLU evaluations
- Balanced scores across AGIEval (43.73), GPT4All (73.6), and TruthfulQA (59.36)
- Maintains high performance in Bigbench (46.8)
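Assuming the 55.87 average cited above is the simple mean of the four benchmark suite scores listed, a quick sanity check:

```python
# Benchmark scores reported in this model card (Nous suites)
scores = {
    "AGIEval": 43.73,
    "GPT4All": 73.6,
    "TruthfulQA": 59.36,
    "Bigbench": 46.8,
}

average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 55.87
```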
## Frequently Asked Questions

**Q: What makes this model unique?**
This model stands out as the best-performing uncensored 8B model on the Open LLM Leaderboard, particularly excelling in MMLU scores. Its DPO fine-tuning recovers the performance lost during the abliteration process, while GGUF quantization enables efficient local deployment.
**Q: What are the recommended use cases?**
The model is specifically optimized for applications that don't require alignment, making it particularly suitable for role-playing and creative tasks. It's best deployed using LM Studio with the "Llama 3" preset for optimal performance.
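LM Studio's "Llama 3" preset corresponds to the standard Llama 3 chat template. For other GGUF runtimes that expect a raw prompt string, a minimal sketch of that format (assuming the standard Llama 3 special tokens; if the GGUF file embeds its own chat template, prefer that one):

```python
# Sketch of the Llama 3 chat template applied by the "Llama 3" preset.
# The special tokens below are the standard Llama 3 ones, not taken
# from this model card.

def format_llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

The trailing assistant header leaves the prompt open for the model to generate its reply; generation is typically stopped on the `<|eot_id|>` token.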