A Reddit user asked [whether our LivePortrait node supports audio-based video](https://www.reddit.com/r/StableDiffusion/comments/1lkfr0o/comment/ndfxavz/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button). Now that we’re close to merging improved audio support in [PR #357](https://github.com/livepeer/comfystream/pull/357), could we add an example of an **audio-based LivePortrait workflow** to the documentation?
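
As a rough starting point for the doc example, the sketch below shows the general shape such an entry could take: a minimal ComfyUI workflow in API format that feeds an audio input into a LivePortrait node and queues it against a running ComfyUI instance via the `/prompt` endpoint. The node class names and input names used here (`LoadAudio`, `LivePortraitAudioDriver`, etc.) are placeholders, not the actual identifiers from PR #357, so whoever writes the docs would need to swap in the real node names and required inputs once the PR lands.

```python
# Placeholder sketch only -- "LoadAudio", "LivePortraitAudioDriver", and the
# input names below are illustrative, NOT the real node identifiers from PR #357.
import json
import uuid
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188"  # default ComfyUI address; adjust as needed

# Workflow in ComfyUI "API format": node-id -> {class_type, inputs}.
# Links between nodes are written as ["<node_id>", <output_index>].
workflow = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "portrait.png"}},
    "2": {"class_type": "LoadAudio",                  # placeholder audio loader
          "inputs": {"audio": "speech.wav"}},
    "3": {"class_type": "LivePortraitAudioDriver",    # placeholder LivePortrait node
          "inputs": {"source_image": ["1", 0],
                     "audio": ["2", 0]}},
    "4": {"class_type": "SaveAnimatedWEBP",           # any video/animation save node works here
          "inputs": {"images": ["3", 0], "fps": 25,
                     "filename_prefix": "liveportrait_audio"}},
}

# Queue the workflow through ComfyUI's /prompt endpoint.
payload = json.dumps({"prompt": workflow, "client_id": str(uuid.uuid4())}).encode()
req = urllib.request.Request(f"{COMFYUI_URL}/prompt", data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))
```

The finished doc example would probably ship the exported workflow JSON plus a screenshot rather than a script, but showing the API-format JSON as well makes it easy to run the workflow through ComfyStream.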