Paper: Zero-Shot Audiovisual Segmentation
Project Page
arXiv Link
Bridging Audio and Vision: Zero-Shot Audiovisual Segmentation by Connecting Pretrained Models
Audiovisual segmentation (AVS) aims to identify visual regions corresponding to sound sources, playing a vital role in video understanding, surveillance, and human-computer interaction. Traditional AVS methods depend on large-scale pixel-level annotations, which are costly and time-consuming to obtain. To address this, we propose a novel zero-shot AVS framework that eliminates task-specific training by leveraging multiple pretrained models. Our approach integrates audio, vision, and text representations to bridge modality gaps, enabling precise sound source segmentation without AVS-specific annotations. We systematically explore different strategies for connecting pretrained models and evaluate their efficacy across multiple datasets.
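To make the "connecting pretrained models" idea concrete, below is a minimal, hypothetical sketch of one such zero-shot pipeline: a pretrained audio-text model (CLAP) names the sounding object, and a pretrained text-conditioned segmenter (CLIPSeg) localizes it in the frame. This is not the paper's released code; the model choices, label vocabulary, and threshold are illustrative assumptions.

# Hypothetical zero-shot AVS sketch: audio -> text label -> segmentation mask.
# Assumes the Hugging Face `transformers` CLAP and CLIPSeg checkpoints below;
# none of these choices are prescribed by the paper.
import numpy as np
import torch
from PIL import Image
from transformers import (
    ClapModel, ClapProcessor,
    CLIPSegProcessor, CLIPSegForImageSegmentation,
)

# Candidate sound-source labels (an assumed vocabulary, not from the paper).
LABELS = ["a dog barking", "a person speaking", "a guitar playing", "a car engine"]

clap = ClapModel.from_pretrained("laion/clap-htsat-unfused")
clap_proc = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")
seg_proc = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
seg_model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

def name_the_sound(waveform: np.ndarray, sr: int = 48000) -> str:
    """Pick the label whose CLAP text embedding best matches the audio clip."""
    audio_in = clap_proc(audios=waveform, sampling_rate=sr, return_tensors="pt")
    text_in = clap_proc(text=LABELS, return_tensors="pt", padding=True)
    with torch.no_grad():
        a = clap.get_audio_features(**audio_in)
        t = clap.get_text_features(**text_in)
    a = a / a.norm(dim=-1, keepdim=True)  # cosine similarity via unit vectors
    t = t / t.norm(dim=-1, keepdim=True)
    return LABELS[(a @ t.T).argmax().item()]

def segment_by_text(image: Image.Image, prompt: str) -> torch.Tensor:
    """Query CLIPSeg with the audio-derived prompt; return a binary mask."""
    inputs = seg_proc(text=[prompt], images=[image], return_tensors="pt")
    with torch.no_grad():
        logits = seg_model(**inputs).logits  # low-resolution mask logits
    return torch.sigmoid(logits) > 0.5  # 0.5 threshold is an arbitrary choice

if __name__ == "__main__":
    # Dummy inputs so the sketch runs end to end; replace with a real video
    # frame and its audio track.
    waveform = np.random.randn(48000).astype(np.float32)  # 1 s of noise
    frame = Image.new("RGB", (352, 352), color=(128, 128, 128))
    label = name_the_sound(waveform)
    print(label, segment_by_text(frame, label).shape)

The key design point this sketch illustrates is using text as the shared interface between modalities: the audio and vision models never interact directly, so no AVS-specific training is needed. How the actual paper bridges the modality gap, and which connection strategies work best, is exactly what it evaluates.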
Citation
@article{lee2025bridging,
  title   = {Bridging Audio and Vision: Zero-Shot Audiovisual Segmentation by Connecting Pretrained Models},
  author  = {Lee, Seung-jae and Seo, Paul Hongsuck},
  journal = {arXiv preprint arXiv:2506.06537},
  year    = {2025}
}