boreal-hl-v1

Maintained By
kudzueye

Property        Value
--------        -----
Author          kudzueye
Model Type      LoRA (Low-Rank Adaptation)
Base Model      Hunyuan
Training Data   150 public domain photos (early 2010s)
Model URL       HuggingFace

What is boreal-hl-v1?

Boreal-HL-v1 is a specialized LoRA adaptation designed to enhance the Hunyuan model's generation capabilities. It focuses on improving detail generation, particularly in aspects like depth of field, realistic skin texture, and lighting quality. The model is capable of generating both realistic short video clips and single-frame images.

Implementation Details

The model was trained with specific parameters including 600 epochs, 4 gradient accumulation steps, and 100 warmup steps. It implements a LoRA architecture with rank 32 and bfloat16 dtype, utilizing the AdamW optimizer with a learning rate of 0.0002.

  • Training dataset: 150 carefully selected public domain photos from early 2010s
  • Adapter type: LoRA with rank 32
  • Optimizer: AdamW with beta values [0.9, 0.99]
  • Weight decay: 0.01
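
The hyperparameters above can be collected into a single configuration sketch. The key names below are illustrative, not tied to any specific training framework; adapt them to whatever LoRA trainer you use:

```python
# Illustrative training configuration assembled from the values listed above.
# Field names are hypothetical; map them onto your training framework's config.
training_config = {
    "epochs": 600,
    "gradient_accumulation_steps": 4,
    "warmup_steps": 100,
    "adapter": {
        "type": "lora",
        "rank": 32,
        "dtype": "bfloat16",
    },
    "optimizer": {
        "type": "adamw",
        "learning_rate": 2e-4,       # 0.0002
        "betas": [0.9, 0.99],
        "weight_decay": 0.01,
    },
    "dataset_size": 150,             # public domain photos, early 2010s
}
```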

Core Capabilities

  • Enhanced depth of field in generations
  • Improved realistic skin texture rendering
  • Better lighting and detail preservation
  • Compatible with both video and image generation
  • Supports high-resolution outputs (512x512 minimum recommended)

Frequently Asked Questions

Q: What makes this model unique?

This model specializes in enhancing the detail and realism of Hunyuan generations, particularly in aspects like depth of field and texture quality. It's designed to work with both video and still image generation, making it versatile for different use cases.

Q: What are the recommended use cases?

The model works best with:

  • Generation of realistic short video clips
  • High-detail single-frame image generation
  • Projects requiring enhanced depth of field and lighting
  • Cases where realistic skin texture is important

Q: What are the recommended settings?

For optimal results:

  • LoRA strength: around 0.6
  • Steps: over 35
  • Resolution: minimum 512x512
  • Guidance: experiment with values between 3.5 and 12.5

Higher guidance and strength may lead to more consistent but less varied outputs.
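
As a sketch, the recommended settings can be expressed as defaults plus a small range check. The parameter names here are assumptions for illustration, not the API of any particular inference pipeline:

```python
# Recommended generation settings from this card, expressed as defaults.
# Parameter names are illustrative, not tied to a specific pipeline API.
RECOMMENDED = {
    "lora_strength": 0.6,
    "num_inference_steps": 40,   # keep steps above 35
    "width": 768,                # keep resolution above 512x512
    "height": 768,
    "guidance_scale": 7.0,       # experiment within 3.5-12.5
}

def within_recommended(settings: dict) -> bool:
    """Return True if the settings fall inside the card's recommended ranges."""
    return (
        settings["num_inference_steps"] > 35
        and settings["width"] >= 512
        and settings["height"] >= 512
        and 3.5 <= settings["guidance_scale"] <= 12.5
    )
```

Starting from these defaults and adjusting guidance first is a reasonable way to trade consistency against variety.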
