This repository hosts the supplementary materials for the article *Creating Stable Diffusion 2.0 Service With BentoML And Diffusers*.
Prompt: Kawaii low poly grey American shorthair cat character, 3D isometric render, ambient occlusion, unity engine, lively color

Negative prompt: low-res, blurry, mutation, deformed
Currently we have three examples:
- `sd2/` contains a service with a `txt2img` endpoint utilizing `stabilityai/stable-diffusion-2`
- `sd2_mega/` contains a service with `txt2img` and `img2img` endpoints utilizing `stabilityai/stable-diffusion-2` and diffusers' custom pipeline
- `anything_v3/` contains a service with a `txt2img` endpoint utilizing `Linaqruf/anything-v3.0`
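
For orientation, a `txt2img` service like the ones above boils down to a BentoML service definition that wraps a diffusers pipeline behind an HTTP endpoint. The snippet below is only a rough sketch, assuming a model has already been saved to the local BentoML model store under the hypothetical tag `sd2` via the `bentoml.diffusers` framework; the actual service definitions live inside each example folder.

```python
# Rough sketch of a txt2img service; assumes a model was imported into the
# BentoML model store under the tag "sd2" (see the import step below).
import bentoml
from bentoml.io import Image, JSON

bento_model = bentoml.diffusers.get("sd2:latest")
sd_runner = bento_model.to_runner()

svc = bentoml.Service("stable_diffusion_v2", runners=[sd_runner])

@svc.api(input=JSON(), output=Image())
def txt2img(input_data):
    # The JSON body (prompt, negative_prompt, ...) is forwarded as keyword
    # arguments to the underlying diffusers pipeline.
    output = sd_runner.run(**input_data)
    # diffusers returns either a pipeline-output object or a plain tuple;
    # the first field holds the list of generated PIL images either way.
    images = output.images if hasattr(output, "images") else output[0]
    return images[0]
```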
We recommend running these services on a machine with an NVIDIA graphics card and the CUDA Toolkit installed.
First, let's prepare a virtual environment and install the required dependencies:
```bash
python3 -m venv venv/ && source venv/bin/activate
pip install -U -r requirements.txt
```
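
Optionally, you can confirm that PyTorch can see your GPU before going further. This is just a sanity check and assumes `torch` is pulled in by `requirements.txt`:

```python
# Quick check that a CUDA device is visible to PyTorch.
import torch

print(torch.cuda.is_available())      # should print True on a CUDA-capable machine
print(torch.cuda.get_device_name(0))  # prints the name of the first GPU
```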
You may need to log in with your Hugging Face account to download the models. To do that, run:
```bash
pip install -U huggingface_hub
huggingface-cli login
```
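
If you need a non-interactive login (for example in CI or inside a Dockerfile), `huggingface_hub` also provides a Python `login()` helper; the token below is a placeholder for your own access token:

```python
# Non-interactive alternative to `huggingface-cli login`;
# "hf_xxx" is a placeholder, replace it with your own token.
from huggingface_hub import login

login(token="hf_xxx")
```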
Then:

- to import `stabilityai/stable-diffusion-2`, run `python3 import_model.py`
- to import `Linaqruf/anything-v3.0`, run `python3 import_anything_v3.py`
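
These scripts essentially download the pipeline from the Hugging Face Hub and register it in the local BentoML model store so the services can pick it up by tag. A minimal sketch, assuming the `bentoml.diffusers` framework and the hypothetical tag `sd2`:

```python
# Rough sketch of what an import script does: fetch the pipeline from the
# Hugging Face Hub and save it to the local BentoML model store under a tag.
import bentoml

bentoml.diffusers.import_model(
    "sd2",                             # tag later referenced by the service
    "stabilityai/stable-diffusion-2",  # model id on the Hugging Face Hub
)
```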
After the models are imported, you can go into `sd2/`, `sd2_mega/`, or `anything_v3/` and follow the README inside each folder to start the service and build a Docker image for it.
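
Once a service is up (BentoML serves HTTP on port 3000 by default), the `txt2img` endpoint can be called from any HTTP client. The example below is a sketch using `requests`; it assumes the JSON body is forwarded as keyword arguments to the diffusers pipeline and that the endpoint returns the generated image bytes:

```python
# Hypothetical client call against a locally running service;
# adjust the host/port and payload keys to match the service you started.
import requests

payload = {
    "prompt": "Kawaii low poly grey American shorthair cat character, 3D isometric render",
    "negative_prompt": "low-res, blurry, mutation, deformed",
}
resp = requests.post("http://127.0.0.1:3000/txt2img", json=payload)
resp.raise_for_status()
with open("output.png", "wb") as f:
    f.write(resp.content)  # the endpoint responds with the generated image
```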
