TY - JOUR
AU - Pei, Gensheng
AU - Chen, Tao
AU - Wang, Yujia
AU - Cai, Xinhao
AU - Shu, Xiangbo
AU - Zhou, Tianfei
AU - Yao, Yazhou
AB - The CLIP model has demonstrated significant advancements in aligning visual and language modalities through large-scale pre-training on image-text pairs, enabling strong zero-shot classification and retrieval capabilities across various domains. However, CLIP's training remains computationally intensive, with high demands on both data processing and memory. To address these challenges, recent masking strategies have emerged, focusing on the selective removal of image patches to improve training efficiency. Although effective, these methods often compromise key semantic information, resulting in suboptimal alignment between visual features and text descriptions. In this work, we present a concise yet effective approach called Patch Generation-to-Selection to enhance CLIP's training efficiency while preserving critical semantic content. Our method introduces a gradual masking process in which a small set of candidate patches is first pre-selected as potential mask regions. Then, we apply Sobel edge detection across the entire image to generate an edge mask that prioritizes the retention of the primary object areas. Finally, similarity scores between the candidate mask patches and their neighboring patches are computed, with optimal transport normalization refining the selection process to ensure a balanced similarity matrix. Our approach, CLIP-PGS, sets new state-of-the-art results in zero-shot classification and retrieval tasks, achieving superior performance in robustness evaluation and language compositionality benchmarks.
TI - Seeing What Matters: Empowering CLIP with Patch Generation-to-Selection
JF - Computing Research Repository
DO - 10.48550/arxiv.2503.17080
DA - 2025-03-21
UR - https://www.deepdyve.com/lp/arxiv-cornell-university/seeing-what-matters-empowering-clip-with-patch-generation-to-selection-0THLHBd8Lm
VL - 2025
IS - 2503
DP - DeepDyve
ER -
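
The abstract above outlines a three-stage masking pipeline: candidate patch pre-selection, a Sobel edge prior that protects primary object areas, and similarity scoring refined by optimal transport normalization. The following is a minimal sketch of that idea in NumPy/SciPy, not the authors' implementation: the patch size, the candidate/mask ratios, the use of raw-pixel patch vectors as features, the Sinkhorn-style balancing loop, and comparing candidates against all patches rather than only their neighbors are all illustrative assumptions.

# Minimal sketch of the generation-to-selection masking described above, assuming
# raw-pixel patch features and a simple Sinkhorn loop; not the authors' code.
import numpy as np
from scipy.ndimage import sobel


def patch_edge_scores(image, patch=16):
    """Mean Sobel edge magnitude per patch (high score ~ object/edge region)."""
    gray = image.mean(axis=-1) if image.ndim == 3 else image
    mag = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))
    h, w = gray.shape[0] // patch, gray.shape[1] // patch
    return mag[: h * patch, : w * patch].reshape(h, patch, w, patch).mean(axis=(1, 3)).ravel()


def patch_features(image, patch=16):
    """L2-normalized raw-pixel patch vectors, standing in for token embeddings."""
    h, w = image.shape[0] // patch, image.shape[1] // patch
    x = image[: h * patch, : w * patch].reshape(h, patch, w, patch, -1)
    x = x.transpose(0, 2, 1, 3, 4).reshape(h * w, -1).astype(np.float32)
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)


def sinkhorn(m, iters=20, eps=1e-8):
    """Optimal-transport-style row/column balancing of a similarity matrix."""
    k = np.exp(m - m.max())
    for _ in range(iters):
        k /= k.sum(axis=1, keepdims=True) + eps
        k /= k.sum(axis=0, keepdims=True) + eps
    return k


def select_mask(image, patch=16, candidate_ratio=0.5, edge_keep=0.75, mask_ratio=0.3, seed=0):
    """Return indices of patches to mask via the gradual generation-to-selection steps."""
    n = (image.shape[0] // patch) * (image.shape[1] // patch)
    rng = np.random.default_rng(seed)
    # 1) Pre-select a pool of candidate mask patches.
    candidates = rng.choice(n, size=int(n * candidate_ratio), replace=False)
    # 2) Sobel edge prior over the whole image: drop candidates lying on strong-edge
    #    patches so primary object areas are retained.
    edges = patch_edge_scores(image, patch)
    candidates = candidates[edges[candidates] < np.quantile(edges, edge_keep)]
    # 3) Similarity of candidates to the remaining patches, balanced with a Sinkhorn
    #    loop; mask the most redundant (highest aggregate similarity) candidates first.
    feats = patch_features(image, patch)
    balanced = sinkhorn(feats[candidates] @ feats.T)
    order = np.argsort(-balanced.sum(axis=1))
    return candidates[order[: min(int(n * mask_ratio), candidates.size)]]


if __name__ == "__main__":
    demo = np.random.default_rng(1).random((224, 224, 3)).astype(np.float32)
    masked = select_mask(demo)
    print(f"masking {masked.size} of {(224 // 16) ** 2} patches")

In a CLIP-style training loop, the returned indices would be used to drop those patch tokens before they enter the vision encoder; the ratios and the Sinkhorn iteration count here are placeholders rather than values reported in the paper.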