Stuart Fingerhut's profile

Using Stable Diffusion for more rendering variation

In this series of images I show a workflow where an input image drives visual variations in your output image with IP-Adapter. I used an iconic scene from Tron: Legacy as my base image. 😎

- A base image is analyzed by ControlNet and used as the base of the output image
- Your selected input image is analyzed and its palette is applied to the output image
- Each image is generated in <20 seconds!

This could be handy for site-specific concepts or for exploring variations on renders with art, graphics, or anything really! 🚀

1x1 or group Gen AI tutoring is available. DM me for more info!
