LyCORIS-experiments

Maintained by: alea31415

License: CreativeML OpenRAIL-M
Author: alea31415
Default settings: LoHA, net dim 8, conv dim 4, alpha 1

What is LyCORIS-experiments?

LyCORIS-experiments is a systematic investigation of training configurations for character and style learning in Stable Diffusion models. The project compares several architectures, including LoRA, LoHA, and LoCon, with detailed analysis of hyperparameters, base models, and training methodologies.

Implementation Details

The experiments use a default configuration of LoHA with network dimension 8, convolution dimension 4, and alpha 1. Training employs a constant learning rate of 2e-4 with the Adam8bit optimizer at 512 resolution and clip skip 1.
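
As a rough illustration, these defaults map onto a kohya-ss sd-scripts launch with the LyCORIS network module roughly as follows. This is a hedged sketch, not the author's recorded command: the paths are placeholders, the flag set is assumed from the settings stated above, and the card's "Adam8bit" is read as kohya's AdamW8bit optimizer type.

```bash
# Hypothetical invocation reconstructing the card's defaults with
# kohya-ss/sd-scripts plus the LyCORIS network module; paths are placeholders.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="/path/to/base_model.safetensors" \
  --train_data_dir="/path/to/dataset" \
  --output_dir="/path/to/output" \
  --network_module=lycoris.kohya \
  --network_dim=8 --network_alpha=1 \
  --network_args "algo=loha" "conv_dim=4" \
  --optimizer_type=AdamW8bit \
  --learning_rate=2e-4 --lr_scheduler=constant \
  --resolution=512 --clip_skip=1
```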

  • Multiple character training examples including Anisphia, Euphyllia, Tilty, and OyamaMahiro/Mihari
  • Extensive style transfer experiments across different base models
  • Comparative analysis of LoRA, LoHA, and LoCon architectures (a minimal sketch of the LoHA update follows this list)
  • Investigation of training resolution, learning rates, and optimizer effects
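
To make the architectural comparison concrete, the snippet below sketches the weight update that distinguishes LoHA from plain LoRA, following the published LyCORIS formulation: LoRA learns a single low-rank product, while LoHA learns two and combines them element-wise (a Hadamard product), raising the effective rank without a proportional increase in parameters. Shapes and tensor names are illustrative only, not taken from the repository.

```python
import torch

# Illustrative shapes: a 320x320 linear layer with rank (net dim) 8.
d_out, d_in, rank, alpha = 320, 320, 8, 1.0
scale = alpha / rank  # the alpha/dim scaling convention ("alpha 1" above)

# LoRA: one low-rank factorization, delta_W = B @ A.
# B starts at zero so the delta is zero at initialization.
A = torch.randn(rank, d_in)
B = torch.zeros(d_out, rank)
delta_lora = scale * (B @ A)

# LoHA: two low-rank factorizations combined by a Hadamard product,
# delta_W = (B1 @ A1) * (B2 @ A2), with effective rank up to rank**2.
A1, A2 = torch.randn(rank, d_in), torch.randn(rank, d_in)
B1, B2 = torch.zeros(d_out, rank), torch.randn(d_out, rank)
delta_loha = scale * (B1 @ A1) * (B2 @ A2)

print(delta_lora.shape, delta_loha.shape)  # both torch.Size([320, 320])
```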

Core Capabilities

  • Character fine-tuning with style preservation
  • Style transfer across different base models
  • Base model compatibility analysis (see the merge sketch after this list)
  • Optimization strategy evaluation
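
Base-model compatibility comes down to whether a delta trained against one checkpoint still behaves sensibly when added to another. Conceptually the transfer is just a state-dict merge, as in this simplified sketch; the key names are placeholders, and a real LyCORIS file stores factorized weights (such as the Hadamard factors above) that would be recomposed into a delta first.

```python
import torch

def merge_delta(base_sd, delta_sd, multiplier=1.0):
    """Add a recomposed weight delta into a base checkpoint's state dict."""
    merged = dict(base_sd)
    for name, delta in delta_sd.items():
        if name in merged:
            merged[name] = merged[name] + multiplier * delta
    return merged

# Toy usage: the same delta applied to two hypothetical base checkpoints.
delta = {"unet.block.weight": 0.01 * torch.randn(4, 4)}
base_a = {"unet.block.weight": torch.randn(4, 4)}
base_b = {"unet.block.weight": torch.randn(4, 4)}
merged_a = merge_delta(base_a, delta, multiplier=0.8)
merged_b = merge_delta(base_b, delta, multiplier=0.8)
```

Because different base checkpoints drift apart in weight space, the same delta can land differently on each, which is what the cross-model comparisons in this project probe.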

Frequently Asked Questions

Q: What makes this model unique?

This project provides comprehensive insight into how different LoRA variants and training configurations behave, including detailed analysis of how well trained weights transfer across base models and how faithfully styles are preserved.

Q: What are the recommended use cases?

The project is particularly useful for researchers and practitioners who want to understand optimal training configurations for character and style learning. It offers practical guidance for choosing base models and training parameters for specific use cases.
