text2cypher-gemma-2-9b-it-finetuned-2024v1

Maintained By
neo4j

  • Base Model: google/gemma-2-9b-it
  • License: Apache 2.0
  • Training Framework: PEFT 0.12.0
  • Primary Task: Text-to-Cypher Generation

What is text2cypher-gemma-2-9b-it-finetuned-2024v1?

This is a specialized language model fine-tuned by Neo4j to convert natural language questions into Cypher database queries. Built on Google's Gemma 2-9b-it model, it is designed to make graph database interactions more accessible through natural language.

Implementation Details

The model was fine-tuned with PEFT (Parameter-Efficient Fine-Tuning) using a LoRA configuration (r=64, alpha=64) and 4-bit quantization via BitsAndBytes. Training was conducted on an A100 PCIe GPU with a learning rate of 2e-5 and a batch size of 4.

  • Implements 4-bit quantization with double quantization
  • Uses bfloat16 compute dtype
  • Employs LoRA for efficient fine-tuning
  • Trained on the Neo4j-Text2Cypher(2024) Dataset
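
A minimal loading sketch based on the quantization settings listed above. The repo ID is taken from the card title, and the `nf4` quant type is an assumption not stated on the card; verify both against the model repository before relying on them.

```python
# Sketch: load the fine-tuned model with the 4-bit settings described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "neo4j/text2cypher-gemma-2-9b-it-finetuned-2024v1"  # assumed repo ID

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,         # double quantization, as noted above
    bnb_4bit_compute_dtype=torch.bfloat16,  # bfloat16 compute dtype
    bnb_4bit_quant_type="nf4",              # assumption; not stated on the card
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```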

Core Capabilities

  • Natural language to Cypher query conversion
  • Understanding of graph database schema
  • Handling complex query patterns
  • Efficient processing with 4-bit quantization
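
To illustrate these capabilities, here is a hedged inference sketch that turns a natural-language question plus a graph schema into a Cypher statement, reusing the `tokenizer` and `model` from the loading example above. The prompt wording and schema string are illustrative; the exact instruction format the model expects comes from the Neo4j-Text2Cypher(2024) dataset and should be checked there.

```python
# Sketch: generate a Cypher query from a question and a schema description.
schema = "(:Person {name: STRING})-[:ACTED_IN]->(:Movie {title: STRING, released: INTEGER})"
question = "Which actors appeared in movies released after 2010?"

prompt = (
    "Generate a Cypher statement to answer the question.\n"
    f"Schema: {schema}\n"
    f"Question: {question}\n"
)

messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
cypher = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(cypher)
```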

Frequently Asked Questions

Q: What makes this model unique?

This model specifically targets the conversion of natural language to Cypher queries, making it highly specialized for graph database interactions. It combines the power of Gemma 2-9b with efficient fine-tuning techniques.

Q: What are the recommended use cases?

The model is ideal for developers and analysts who need to interact with Neo4j graph databases using natural language queries. It's particularly useful in applications requiring natural language interfaces to graph databases.
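
For example, a generated statement can be passed straight to a Neo4j instance with the official Python driver. This is a sketch only: the connection URI and credentials are placeholders, and the `cypher` variable is assumed to come from the generation example above.

```python
# Sketch: run a model-generated Cypher statement against a Neo4j database.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    result = session.run(cypher)  # `cypher` produced by the model, as above
    for record in result:
        print(record.data())

driver.close()
```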
