Run Stable Diffusion with companion models on a GPU-enabled Kubernetes cluster, complete with a WebUI and automatic model fetching, for a two-step install that takes less than 2 minutes (excluding download times).
If you are on a restricted internet connection and want to save bandwidth, you can also build the Dockerfile manually and push the image to a local container registry.
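A minimal sketch of such a local build-and-push, assuming the repository is checked out locally and a registry is reachable at `localhost:5000` (both the registry address and the image name are illustrative assumptions):

```shell
# Build the image from the repository's Dockerfile
# (image name and registry address are placeholders)
docker build -t localhost:5000/stable-diffusion-webui:latest .

# Push to the local registry so cluster nodes can pull it
# without re-downloading the base layers from the internet
docker push localhost:5000/stable-diffusion-webui:latest
```

You would then point the chart's image settings in your `values.yaml` at this tag (the exact key name depends on the chart's schema).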
Uses the nvidia/cuda image as a base.
Requirements:

- A GPU-enabled Kubernetes cluster with the NVIDIA gpu-operator (which bundles the CUDA libraries) set up successfully
- helm installed locally

Setup:

1. Add the Helm repository: `helm repo add amithkk-sd https://amithkk.github.io/stable-diffusion-k8s`
2. Create a `values.yaml` with customized settings: `nodeAffinity`, `cliArgs` (see below), and ingress settings (which will allow you to access this externally without needing to `kubectl port-forward`)
3. Install the chart: `helm install amithkk-sd/stable-diffusion -f <your-values.yaml>`
4. Wait for the containers to come up and follow the instructions returned by Helm to connect. This may take a while, as it has to download a ~10 GiB Docker image and ~5 GiB of models.
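A sketch of what such a `values.yaml` might look like. The `nodeAffinity`, `cliArgs`, and ingress keys come from the steps above, but the exact schema and field names are assumptions — check the chart's default `values.yaml` for the authoritative layout:

```yaml
# Hypothetical values.yaml sketch — verify key names against the chart's defaults
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
      - matchExpressions:
          # Schedule only onto GPU nodes (label set by the gpu-operator)
          - key: nvidia.com/gpu.present
            operator: In
            values: ["true"]

# Arguments passed to the WebUI (defaults shown in the section below)
cliArgs: "--extra-models-cpu --optimized-turbo"

ingress:
  enabled: true
  hosts:
    - host: sd.example.com   # placeholder hostname
```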
By extending your `values.yaml` you can change the `cliArgs` key, which contains the arguments that will be passed to the WebUI. By default, `--extra-models-cpu --optimized-turbo` are given, which allow you to use this model on a 6 GB GPU. However, some features might not be available in this mode.
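For example, overriding the defaults might look like the following (the flag combination shown is illustrative — see the argument list linked below for what is actually supported):

```yaml
# Override the default WebUI arguments
cliArgs: "--extra-models-cpu --optimized"
```

You can then apply the change with `helm upgrade` using the same `-f <your-values.yaml>` flag as at install time.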
You can find the full list of arguments here.
If your GPU has trouble with the default flags, remove the `--optimized` and `--optimized-turbo` flags and add `--no-half` to `cliArgs` when installing (more info here). You can also try `--precision full --no-half`.

Disclaimer: The author(s) of this project are not responsible for any content generated using this interface.
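The fallback flags mentioned above can be expressed in `values.yaml` as follows (a sketch; the key name is as used earlier, and whether other default flags should be retained alongside these is an assumption to verify):

```yaml
# Full-precision fallback for GPUs that misbehave with half precision
cliArgs: "--precision full --no-half"
```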
Special thanks to everyone behind these awesome projects; without them, none of this would have been possible: