The recent success of generative models shows that leveraging a
multi-modal embedding space makes it possible to manipulate an image using text information.
However, manipulating an image with sources other than text, such as
sound, is challenging due to the dynamic characteristics of those sources.
In particular, sound can convey vivid emotions and dynamic expressions of the real
world. Here, we propose a framework that directly encodes sound into the
multi-modal (image-text) embedding space and manipulates an image from that
space. Our audio encoder is trained to produce a latent representation from an
audio input, which is forced to be aligned with image and text representations
in the multi-modal embedding space. We use a direct latent optimization method
based on aligned embeddings for sound-guided image manipulation. We also show
that our method can mix the text and audio modalities, which enriches the variety
of image modifications. We verify the effectiveness of our sound-guided image
manipulation quantitatively and qualitatively. Experiments on zero-shot audio
classification and
semantic-level image classification show that our proposed model outperforms
other state-of-the-art text- and sound-guided methods.
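The abstract describes training an audio encoder so that its embeddings align with paired image and text embeddings in a shared space. A common way to realize such alignment is a symmetric contrastive (InfoNCE-style) objective; the following is a minimal numpy sketch under that assumption. All names, shapes, and the temperature value are illustrative, not the authors' implementation:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere before comparison."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_loss(audio_emb, anchor_emb, temperature=0.07):
    """Symmetric InfoNCE loss: pull each audio embedding toward its
    paired image/text embedding (diagonal) and push it away from the
    other pairs in the batch (off-diagonal)."""
    a = l2_normalize(audio_emb)
    b = l2_normalize(anchor_emb)
    logits = a @ b.T / temperature        # (N, N) cosine-similarity logits
    labels = np.arange(len(a))            # positives lie on the diagonal

    def xent(l):
        # numerically stable cross-entropy against the diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_prob[labels, labels].mean()

    # average the audio->anchor and anchor->audio directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
audio = rng.normal(size=(4, 512))
aligned = audio + 0.01 * rng.normal(size=(4, 512))  # nearly aligned pairs
random = rng.normal(size=(4, 512))                  # unrelated pairs
# aligned pairs should incur a much lower contrastive loss
assert contrastive_loss(audio, aligned) < contrastive_loss(audio, random)
```

Once the audio encoder is trained this way, a sound embedding can stand in for a text embedding as the optimization target when manipulating an image in the shared space, which is what enables the text-audio mixing described above.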