galactica-30B-evol-instruct-70K-GPTQ

Maintained By
TheBloke

Galactica 30B Evol Instruct GPTQ

  • Parameter Count: 4.44B (Quantized)
  • Model Type: GPTQ-Quantized LLM
  • License: CC BY-NC 4.0 (non-commercial)
  • Training Data: WizardLM Evol Instruct 70K
  • Paper: WizardLM Paper

What is galactica-30B-evol-instruct-70K-GPTQ?

This model is a 4-bit quantized version of the Galactica 30B model that has been fine-tuned on the WizardLM Evol-Instruct dataset. The original Galactica model was trained on 106 billion tokens of scientific text, making it particularly strong for scientific and academic tasks. The Evol-Instruct fine-tuning enhances its ability to follow complex instructions.

Implementation Details

The model uses GPTQ quantization to reduce its memory footprint while preserving most of the original model's output quality. It was quantized without a group size to minimize VRAM requirements, and uses act-order (desc_act) to improve inference accuracy.
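The memory savings from 4-bit quantization are easy to sanity-check with back-of-envelope arithmetic. The sketch below is a rough estimate for weights alone (it ignores activations, the KV cache, and the small per-layer quantization metadata), comparing 4-bit packed weights against fp16 for a 30B-parameter model:

```python
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate memory for model weights alone, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# fp16 weights: 2 bytes per parameter
fp16_gb = weight_memory_gb(30e9, 16)  # ~60 GB
# GPTQ 4-bit packed weights: half a byte per parameter
int4_gb = weight_memory_gb(30e9, 4)   # ~15 GB

print(f"fp16: {fp16_gb:.1f} GB, 4-bit: {int4_gb:.1f} GB")
```

Omitting the group size avoids storing per-group scales and zero-points, which is why this configuration minimizes VRAM at a small accuracy cost relative to grouped quantization.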

  • 4-bit precision quantization
  • Optimized for both CUDA and Triton execution
  • Compatible with AutoGPTQ and text-generation-webui
  • Uses specific instruction format for optimal results
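Because the model expects a specific instruction format, prompts should be wrapped accordingly. The helper below assumes the Alpaca-style "### Instruction / ### Response" template commonly used by WizardLM Evol-Instruct fine-tunes; check the model card itself for the exact template before relying on this layout:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in an assumed Alpaca-style template."""
    return f"### Instruction:\n{instruction.strip()}\n\n### Response:\n"

prompt = build_prompt("Summarize the key idea of GPTQ quantization.")
print(prompt)
```

The trailing "### Response:" line cues the model to begin its answer; generation is typically stopped when the model emits the next "### Instruction:" marker.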

Core Capabilities

  • Scientific text generation and analysis
  • Complex instruction following
  • Mathematical and technical content generation
  • Research paper analysis and summarization
  • Scientific Q&A

Frequently Asked Questions

Q: What makes this model unique?

This model combines Galactica's scientific expertise with enhanced instruction-following capabilities from WizardLM's Evol-Instruct dataset, all while being optimized for efficient deployment through GPTQ quantization.

Q: What are the recommended use cases?

The model excels in scientific and academic applications, including research paper analysis, technical writing, mathematical explanations, and scientific problem-solving. It's particularly suitable for educational and research environments where scientific accuracy is crucial.
