Mistral-Gutenberg-Doppel-7B-FFT-GGUF

QuantFactory

A 7B parameter GGUF model based on Mistral-7B-Instruct-v0.2, fine-tuned on Gutenberg datasets using ORPO technique for enhanced literary capabilities.

Property         Value
Parameter Count  7.24B
License          Apache-2.0
Base Model       Mistral-7B-Instruct-v0.2
Format           GGUF

What is Mistral-Gutenberg-Doppel-7B-FFT-GGUF?

This is a GGUF quantization, produced with llama.cpp, of the Mistral-Gutenberg-Doppel model. The underlying model is a full finetune of Mistral-7B-Instruct-v0.2, trained on literary datasets derived from Project Gutenberg.

Implementation Details

The model underwent ORPO (Odds Ratio Preference Optimization) training on 4x A100 GPUs for 2 epochs. It incorporates preference data from two datasets, jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo, making it particularly well-suited for literary and textual applications.

  • Quantized to GGUF for efficient inference
  • Full finetune implementation rather than QLoRA
  • Trained using ORPO methodology

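For intuition, the ORPO objective combines a standard supervised (SFT) loss on the preferred completion with an odds-ratio penalty that pushes the model's odds of the chosen completion above those of the rejected one. The sketch below is illustrative only, not the training code used for this model; the function name and the default weighting `lam` are assumptions.

```python
import math

def orpo_loss(logp_chosen: float, logp_rejected: float, lam: float = 0.1) -> float:
    """Illustrative sketch of the ORPO objective (Odds Ratio Preference
    Optimization) for a single preference pair.

    logp_chosen / logp_rejected: length-normalized log-likelihoods of the
    preferred and dispreferred completions under the policy (both < 0).
    lam: weight on the odds-ratio term (illustrative default).
    """
    def log_odds(logp: float) -> float:
        # odds(y|x) = P / (1 - P), computed from the log-probability
        p = math.exp(logp)
        return logp - math.log(1.0 - p)

    # Odds-ratio term: -log sigmoid(log odds(chosen) - log odds(rejected))
    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    l_or = -math.log(1.0 / (1.0 + math.exp(-ratio)))

    # SFT term: negative log-likelihood of the chosen completion
    l_sft = -logp_chosen
    return l_sft + lam * l_or
```

When the chosen completion is already more likely than the rejected one, the odds-ratio term shrinks toward zero and the loss reduces to roughly the SFT term alone.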
Core Capabilities

  • Enhanced literary text generation
  • Improved narrative understanding
  • Optimized for conversational applications
  • Efficient deployment through GGUF format

Frequently Asked Questions

Q: What makes this model unique?

This model combines the powerful Mistral-7B architecture with specialized training on literary texts, using full finetuning rather than parameter-efficient methods. The GGUF format makes it particularly suitable for deployment in resource-constrained environments.

Q: What are the recommended use cases?

The model is particularly well-suited for literary applications, text generation, and conversational tasks that require understanding of narrative structure and literary style. It's optimized for both performance and efficiency through its GGUF format.
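As a quick-start sketch, a GGUF file like this one can be run locally with llama.cpp's `llama-cli`. The quant filename and sampling settings below are illustrative assumptions, not values published with this model:

```shell
# Generate text with llama.cpp (paths and quant name are illustrative)
./llama-cli \
  -m ./Mistral-Gutenberg-Doppel-7B-FFT.Q4_K_M.gguf \
  -p "Write the opening paragraph of a Victorian novel." \
  -n 256 \
  --temp 0.8
```

Smaller quants trade some output quality for lower memory use, which is the usual lever when deploying in resource-constrained environments.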
