Yining Ma, Jingwen Li, Zhiguang Cao, Wen Song, Le Zhang, Zhenghua Chen, Jing Tang
Advances in Neural Information Processing Systems (NeurIPS)
Publication year: 2021

Recently, Transformer has become a prevailing deep architecture for solving vehicle routing problems (VRPs). However, the original Transformer is less effective in learning improvement models because its positional encoding (PE) method is not suitable for representing VRP solutions. This paper presents a novel Dual-Aspect Collaborative Transformer (DACT) to learn embeddings for the node and positional features separately, instead of fusing them together as done in the original PE, so as to avoid potential noise and incompatible attention scores. Moreover, the positional features are embedded through a novel cyclic positional encoding (CPE) method to capture the circularity and symmetry of VRP solutions. We train DACT using Proximal Policy Optimization, and design a curriculum learning strategy for better sample efficiency. We apply DACT to solve the traveling salesman problem (TSP) and capacitated vehicle routing problem (CVRP). Results show that DACT outperforms existing Transformer-based improvement models, and exhibits better capability of generalizing across different problem sizes. Code is available at https://github.com/yining043/VRP-DACT
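To illustrate the circularity property the abstract attributes to CPE, here is a minimal sketch of a cyclic positional encoding. It is an illustrative assumption, not the paper's exact formulation: each of the n positions in a tour is mapped to an angle on the unit circle and embedded with sine/cosine waves at integer frequencies, so the encoding is periodic in the tour length and the last position sits as close to the first as any two adjacent positions.

```python
import numpy as np

def cyclic_positional_encoding(n, d):
    """Hypothetical cyclic PE sketch: map each of n positions to an angle
    on the unit circle and embed it with sin/cos at integer frequencies,
    so the encoding wraps around after n positions (unlike the original
    Transformer PE, which treats a tour as an open sequence)."""
    angles = 2 * np.pi * np.arange(n) / n       # position -> angle on circle
    freqs = np.arange(1, d // 2 + 1)            # integer frequencies keep periodicity
    emb = np.zeros((n, d))
    emb[:, 0::2] = np.sin(np.outer(angles, freqs))
    emb[:, 1::2] = np.cos(np.outer(angles, freqs))
    return emb

pe = cyclic_positional_encoding(10, 8)
# Circularity: the gap between the last and first positions equals the
# gap between any two adjacent positions, reflecting that VRP tours
# have no distinguished start node.
d_wrap = np.linalg.norm(pe[9] - pe[0])
d_adj = np.linalg.norm(pe[0] - pe[1])
```

Under this construction, rotating a tour only rotates the angles, so pairwise distances between positional embeddings are preserved, which is one way to realize the rotational symmetry of VRP solutions.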

