Published February 26, 2025
| Version v1
Conference paper
Dynamic hierarchical token merging for vision transformers
Contributors
Others:
- Département Systèmes et Circuits Intégrés Numériques (DSCIN), Laboratoire d'Intégration des Systèmes et des Technologies (LIST), Direction de Recherche Technologique (DRT), Commissariat à l'énergie atomique et aux énergies alternatives (CEA)
- Université Côte d'Azur (UniCA)
Description
Vision Transformers (ViTs) have achieved impressive results in computer vision, excelling in tasks such as image classification, segmentation, and object detection. However, their quadratic complexity $O(N^2)$, where $N$ is the token sequence length, poses challenges for deployment on resource-limited devices. To address this issue, dynamic token merging has emerged as an effective strategy, progressively reducing the token count during inference to achieve computational savings. Some strategies consider all tokens in the sequence as merging candidates, without prioritizing spatially close tokens. Other strategies either limit token merging to a local window or constrain it to pairs of adjacent tokens, and thus fail to capture more complex feature relationships. In this paper, we propose Dynamic Hierarchical Token Merging (DHTM), a novel token merging approach based on the premise that spatially close tokens share more information than distant ones; we consider all pairs of spatially close candidates instead of imposing fixed windows. Moreover, our approach draws on the principles of Hierarchical Agglomerative Clustering (HAC): in each layer, we iteratively fuse a fixed number of selected neighboring token pairs based on their similarity. Our approach is off-the-shelf, i.e., it does not require additional training. We evaluate it on the ImageNet-1K dataset for classification, achieving substantial computational savings with minimal accuracy loss and surpassing existing token merging methods.
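To make the idea above concrete, here is a minimal PyTorch sketch of one greedy merging step in the spirit of the description (not the authors' implementation): it scores 4-connected neighbor pairs on the token grid by cosine similarity and averages the most similar non-overlapping pairs. The function name, the 4-connectivity, the averaging rule, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def merge_adjacent_pairs(x, grid_h, grid_w, r):
    """One greedy merging step: fuse the r most similar pairs of
    spatially adjacent (4-connected) tokens by averaging them.

    x : (N, C) token features laid out row-major on a grid_h x grid_w grid.
    r : number of token pairs to merge; the sequence shrinks from N to N - r.
    """
    N, _ = x.shape
    assert N == grid_h * grid_w
    x = x.clone()
    feats = F.normalize(x, dim=-1)  # unit-norm features -> dot product = cosine similarity

    # Enumerate 4-connected neighbor pairs on the spatial grid.
    idx = torch.arange(N).view(grid_h, grid_w)
    pairs = torch.cat([
        torch.stack([idx[:, :-1].reshape(-1), idx[:, 1:].reshape(-1)], dim=1),  # horizontal
        torch.stack([idx[:-1, :].reshape(-1), idx[1:, :].reshape(-1)], dim=1),  # vertical
    ], dim=0)
    sims = (feats[pairs[:, 0]] * feats[pairs[:, 1]]).sum(dim=-1)

    # Greedily select the r most similar pairs whose endpoints are still free.
    taken = torch.zeros(N, dtype=torch.bool)
    merged_into = torch.arange(N)
    chosen = 0
    for p in sims.argsort(descending=True):
        if chosen == r:
            break
        a, b = pairs[p].tolist()
        if taken[a] or taken[b]:
            continue
        taken[a] = taken[b] = True
        merged_into[b] = a  # token b is absorbed into token a
        chosen += 1

    # Average each merged pair and drop the absorbed tokens.
    keep = torch.ones(N, dtype=torch.bool)
    for b in range(N):
        a = merged_into[b].item()
        if a != b:
            x[a] = 0.5 * (x[a] + x[b])
            keep[b] = False
    return x[keep]


# Example: one step on a 14x14 ViT token grid (hypothetical sizes).
tokens = torch.randn(196, 768)
reduced = merge_adjacent_pairs(tokens, 14, 14, r=16)  # 196 -> 180 tokens
```

In the paper's setting such a step would be repeated layer by layer, with merged tokens tracked by their spatial extent so that adjacency stays meaningful; the sketch keeps a single regular grid for brevity.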
Abstract
International audience
Additional details
Identifiers
- URL: https://hal.science/hal-04885469
- URN: urn:oai:HAL:hal-04885469v1
Origin repository
- UNICA