Stable-Diffusion-1.5-LCM-ONNX-RKNN2

Maintained by: happyme531


  • Base Model: TheyCallMeHex/LCM-Dreamshaper-V7-ONNX
  • Framework: RKNN2, ONNX
  • Memory Usage (512x512): ~5.6GB
  • Author: happyme531

What is Stable-Diffusion-1.5-LCM-ONNX-RKNN2?

This is an implementation of Stable Diffusion 1.5 using the Latent Consistency Model (LCM), which reduces generation to a handful of denoising steps, converted for RKNPU2 hardware acceleration. It is designed to run efficiently on RK3588 platforms, combining fast inference with a modest memory footprint (roughly 5.2-5.6GB depending on resolution).
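As a rough sketch of how the converted models might be loaded on an RK3588 using the rknn-toolkit-lite2 Python API (the file names below are placeholders, not the repository's actual paths):

```python
from rknnlite.api import RKNNLite

def load_rknn(path):
    """Load a converted .rknn model and initialize the NPU runtime."""
    model = RKNNLite()
    if model.load_rknn(path) != 0:
        raise RuntimeError(f"failed to load {path}")
    # RK3588 exposes three NPU cores; AUTO lets the driver pick one.
    if model.init_runtime(core_mask=RKNNLite.NPU_CORE_AUTO) != 0:
        raise RuntimeError(f"failed to init runtime for {path}")
    return model

# The pipeline is split into three separately converted models.
text_encoder = load_rknn("text_encoder.rknn")  # placeholder file name
unet = load_rknn("unet.rknn")                  # placeholder file name
vae_decoder = load_rknn("vae_decoder.rknn")    # placeholder file name
```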

Implementation Details

At 384x384 resolution, reported inference times on the RK3588 are about 0.05s for text encoding, 2.36s per U-Net iteration, and 5.48s for VAE decoding; the three stages are chained as sketched below the feature list. At 512x512 resolution it remains usable, with memory usage of about 5.6GB.

  • Supports multiple resolution configurations (384x384 and 512x512)
  • Optimized for RKNPU2 hardware acceleration
  • Implements the Latent Consistency Model for fast, few-step generation
  • Distributed in ONNX format for portability across runtimes
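
Schematically, one text-to-image pass chains those three stages: encode the prompt once, run the U-Net for a few LCM steps, then decode the latents. The sketch below assumes the model objects from the loading example, a generic LCM scheduler object, and a particular U-Net input ordering; none of these are taken from the repository's actual scripts.

```python
import numpy as np

def text_to_image(text_encoder, unet, vae_decoder, scheduler,
                  token_ids, num_steps=4, height=384, width=384, seed=0):
    """Schematic LCM flow: encode once, denoise for a few steps, decode once."""
    # Text encoding: ~0.05s on RK3588
    prompt_embeds = text_encoder.inference(inputs=[token_ids])[0]

    # SD 1.5 latents are 1/8 of the image resolution per spatial dimension.
    rng = np.random.default_rng(seed)
    latents = rng.standard_normal((1, 4, height // 8, width // 8)).astype(np.float32)

    # U-Net: ~2.36s per iteration at 384x384; LCM only needs a few iterations.
    scheduler.set_timesteps(num_steps)
    for t in scheduler.timesteps:
        noise_pred = unet.inference(
            inputs=[latents, np.array([t], dtype=np.int64), prompt_embeds])[0]
        latents = scheduler.step(noise_pred, t, latents)  # assumed to return new latents

    # VAE decoding: ~5.48s at 384x384; 0.18215 is the SD 1.5 latent scaling factor.
    return vae_decoder.inference(inputs=[latents / 0.18215])[0]
```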

Core Capabilities

  • Fast text-to-image generation on RK3588 hardware
  • Efficient memory utilization (5.2-5.6GB depending on resolution)
  • Support for customizable inference step counts
  • Optimized for both 384x384 and 512x512 resolutions (see the usage sketch after this list)
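
If the pipeline is wrapped as in the earlier sketch, resolution and step count are the two main knobs. Note that RKNN graphs are usually compiled for fixed input shapes, so each resolution typically needs its own set of converted models; the calls below are purely illustrative and reuse the hypothetical text_to_image() function.

```python
# Illustrative only: both calls reuse the hypothetical objects defined above.
image_384 = text_to_image(text_encoder, unet, vae_decoder, scheduler,
                          token_ids, num_steps=4, height=384, width=384)
image_512 = text_to_image(text_encoder, unet, vae_decoder, scheduler,
                          token_ids, num_steps=4, height=512, width=512)
```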

Frequently Asked Questions

Q: What makes this model unique?

This model stands out for its specific optimization for RKNPU2 hardware, making it well suited to embedded AI applications on RK3588 platforms while keeping the output quality of Stable Diffusion 1.5 and the few-step speed of LCM.

Q: What are the recommended use cases?

The model is well suited to embedded systems built around RK3588 chips where on-device image generation is needed, particularly applications that require moderate-resolution output under tight compute and memory budgets.
