Bilateral Collaboration with Large Vision-Language Models for Open Vocabulary Human-Object Interaction Detection
/ Abstract
Open vocabulary Human-Object Interaction (HOI) detection is a challenging task that detects all triplets of interest in an image, even those that are not pre-defined in the training set. Existing approaches typically rely on output features generated by large Vision-Language Models (VLMs) to enhance the generalization ability of interaction representations. However, the visual features produced by VLMs are holistic and coarse-grained, which contradicts the nature of detection tasks. To address this issue, we propose a novel Bilateral Collaboration framework for open vocabulary HOI detection (BC-HOI). This framework includes an Attention Bias Guidance (ABG) component, which guides the VLM to produce fine-grained, instance-level interaction features according to the attention bias provided by the HOI detector. It also includes a Large Language Model (LLM)-based Supervision Guidance (LSG) component, which provides fine-grained, token-level supervision for the HOI detector via the LLM component of the VLM. LSG enhances the ability of ABG to generate high-quality attention bias. We conduct extensive experiments on two popular benchmarks, HICO-DET and V-COCO, consistently achieving superior performance in both the open vocabulary and closed settings. Code is available at https://github.com/MPI-Lab/BC-HOI.
Venue: 2025 IEEE/CVF International Conference on Computer Vision (ICCV)