Published as a conference paper at ICLR 2023

BQ-NCO: BISIMULATION QUOTIENTING FOR GENERALIZABLE NEURAL COMBINATORIAL OPTIMIZATION

Darko Drakulic1  Sofia Michel1  Florian Mai2,*  Arnaud Sors1  Jean-Marc Andreoli1
1 NAVER Labs Europe  {firstname.lastname}@naverlabs.com
2 Idiap Research Institute and EPFL  florian.mai@idiap.ch
* Work was done as part of an internship at NAVER Labs Europe.

ABSTRACT

Despite the success of Neural Combinatorial Optimization methods for end-to-end heuristic learning, out-of-distribution generalization remains a challenge. In this paper, we present a novel formulation of combinatorial optimization (CO) problems as Markov Decision Processes (MDPs) that effectively leverages symmetries of the CO problems to improve out-of-distribution robustness. Starting from the standard MDP formulation of constructive heuristics, we introduce a generic transformation based on bisimulation quotienting (BQ) in MDPs. This transformation makes it possible to reduce the state space by accounting for the intrinsic symmetries of the CO problem, and facilitates solving the MDP. We illustrate our approach on the Traveling Salesman, Capacitated Vehicle Routing and Knapsack Problems. We present a BQ reformulation of these problems and introduce a simple attention-based policy network that we train by imitation of (near-)optimal solutions for small instances from a single distribution. We obtain new state-of-the-art generalization results for instances with up to 1000 nodes from synthetic and realistic benchmarks that vary both in size and node distributions.

1 INTRODUCTION

Combinatorial Optimization problems are crucial in many application domains such as transportation, energy, logistics, etc. Because they are generally NP-hard (Cook et al., 1997), their resolution at real-life scales is mainly done by heuristics, i.e. efficient algorithms that generally produce good-quality solutions (Boussaïd et al., 2013). However, strong heuristics are generally problem-specific and designed by domain experts. Neural Combinatorial Optimization (NCO) is a relatively recent line of research that focuses on using deep neural networks to learn such heuristics from data, possibly exploiting information on the specific distribution of problem instances of interest (Bengio et al., 2021; Cappart et al., 2021). Despite the impressive progress in this field over the last few years, the out-of-distribution generalization of the learned heuristics, especially to larger instances, remains a major hurdle (Joshi et al., 2022; Manchanda et al., 2022).

In this paper, we are interested in constructive NCO methods, which build a solution incrementally by applying a sequence of elementary steps. These methods are often quite generic; see e.g. the seminal papers by Khalil et al. (2017); Kool et al. (2019). Most CO problems can indeed be represented in this way, although the representation is not unique, as the nature of the steps is, to a large extent, a matter of choice.
Given a choice of step space, solving the CO problem amounts to computing an optimal policy for sequentially selecting the steps in the construction. This task can typically be performed in the framework of Markov Decision Processes (MDPs), through imitation or reinforcement learning. The exponential size of the state space, inherent to the NP-hardness of combinatorial problems, usually precludes other methods such as (tabular) dynamic programming.

Whatever the learning method used to solve the MDP, its efficiency, and in particular its out-of-distribution generalization capabilities, greatly depends on the state representation. The state space is often characterized by deep symmetries which, if not adequately identified and leveraged, hinder the training process by forcing it to independently learn the policy at states which in fact are essentially the same (modulo some symmetry).

arXiv:2301.03313v1 [cs.LG] 9 Jan 2023

In this work, we investigate a type of symmetry which often occurs in MDP formulations of constructive CO heuristics. We first introduce a generic framework to systematically derive a naive CO problem-specific MDP. We formally demonstrate the equivalence between solving the MDP and solving the CO problem, and highlight the flexibility of the MDP formulation by defining a minimal set of conditions for the equivalence to hold. Our framework is general and easy to specialize so as to encompass previously proposed learning-based construction heuristics. We then show that the state space of this naive MDP is inefficient because it fails to capture deep symmetries of the CO problem, even though such symmetries are easy to identify. Therefore, we propose a method to transform the naive MDP, based on the concept of bisimulation quotienting (BQ), in order to obtain a reduced state space which is easier to process by the usual (approximate) MDP solvers.
We illustrate our approach on three well-known CO problems: the Traveling Salesman Problem (TSP), the Capacitated Vehicle Routing Problem (CVRP) and the Knapsack Problem (KP). Furthermore, we propose a simple transformer-based architecture for these problems, which we train by imitation of expert trajectories derived from (near-)optimal solutions. In particular, we show that our model is well-suited to our BQ formulation: it spends a monotonically increasing amount of computation as a function of the subproblem size (and therefore its complexity), in contrast to most previous models. Finally, extensive experiments confirm the validity of our approach, and in particular its state-of-the-art out-of-distribution generalization capacity.
In summary, our contributions are as follows: 1) We present a generic and flexible framework to define a construction-heuristic MDP for arbitrary CO problems; 2) We propose a method to simplify commonly used "naive" MDPs for constructive NCO via symmetry-focused bisimulation quotienting; 3) We design an adequate transformer-based architecture for the new MDP, for the TSP, CVRP and KP; 4) We achieve state-of-the-art generalization performance on these three problems.

2 COMBINATORIAL OPTIMIZATION AS A MARKOV DECISION PROBLEM

In this section, we present a generic formalization of constructive heuristics which underlies their MDP formulation. A deterministic CO problem is denoted by a pair (F, X), where F is its objective function space and X its (discrete) solution space. A problem instance f∈F is a mapping f:X→R∪{∞}, with the convention that f(x)=∞ if x is infeasible for instance f.
A solver for problem (F, X) is a functional SOLVE : F → X satisfying

SOLVE(f) = arg min_{x∈X} f(x).   (1)

Incremental solution construction  Constructive heuristics for CO problems build a solution sequentially, starting from an empty partial solution and expanding it at each step until a finalized solution is reached. Many NCO approaches are based on a formalization of that process as an MDP, e.g. Khalil et al. (2017); Kool et al. (2019); Zhang et al. (2020).
Such an MDP can be obtained, for an arbitrary CO problem (F, X), using the following ingredients:

Steps: T is a set of available steps to construct solutions. A partial solution is a pair (f, t1:n) of a problem instance f∈F and a sequence of steps t1:n∈T* (the set of sequences of elements of T). Observe that a partial solution (in F×T*) is not a solution (in X), but may represent one.

Representation: SOL : F×T* → X∪{⊥} is a mapping that takes a partial solution and returns either a feasible solution (in which case the partial solution is said to be finalized), or ⊥ otherwise.

Evaluation: VAL : F×T* → R∪{∞} is a mapping that takes a partial solution and returns an estimate of the minimum value of its expansions into finalized solutions. If the returned value is finite, the partial solution is said to be admissible.
In order to define an MDP using these ingredients, we assume they satisfy the following axioms:

∀f∈F, x∈X:  f(x) < ∞ ⇔ ∃t1:n∈T* such that SOL(f, t1:n) = x,   (2a)
∀f∈F, t1:n∈T*:  SOL(f, t1:n) ≠ ⊥ ⇒ ∀m∈{1:n−1}, SOL(f, t1:m) = ⊥,   (2b)
∀f∈F, t1:n∈T*, x∈X:  SOL(f, t1:n) = x ⇒ VAL(f, t1:n) = f(x) and ∀m∈{1:n−1}, VAL(f, t1:m) < ∞.   (2c)

Equation 2a states that the feasible solutions are exactly those represented by a finalized partial solution; Equation 2b states that if a partial solution is finalized, then none of the preceding partial solutions in its construction is finalized; Equation 2c states that the evaluation of a finalized partial solution is the value of the solution it represents, and that all its preceding partial solutions are admissible. We call a triplet ⟨T, SOL, VAL⟩ satisfying the above axioms a specification of problem (F, X).

Note that a specification is not intrinsic to the problem. The step space T results from a choice of how to construct a solution sequentially.
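As a concrete sanity check (our own illustration, not code from the paper), axioms 2b and 2c can be verified by brute-force enumeration on a tiny instance. Below, a 3-node Euclidean TSP where steps are nodes, with SOL and VAL as in the TSP example later in this section; all helper names are ours.

```python
import math
from itertools import product

# Hedged sketch: brute-force check of axioms (2b) and (2c) on a tiny
# Euclidean TSP where the step space T is the set of nodes.
nodes = {'a': (0.0, 0.0), 'b': (1.0, 0.0), 'c': (0.0, 1.0)}

def dist(u, v):
    (x1, y1), (x2, y2) = nodes[u], nodes[v]
    return math.hypot(x1 - x2, y1 - y2)

def closed_length(seq):
    return sum(dist(seq[i], seq[i + 1]) for i in range(len(seq) - 1)) + dist(seq[-1], seq[0])

def sol(seq):
    # SOL: finalized iff the sequence is a permutation of all nodes, else ⊥ (None).
    return seq if len(seq) == len(nodes) and len(set(seq)) == len(nodes) else None

def val(seq):
    # VAL: closed length of the partial path if its nodes are pairwise distinct, else inf.
    if len(set(seq)) != len(seq):
        return math.inf
    return closed_length(seq)

# Enumerate all step sequences up to full length.
seqs = [s for n in range(1, len(nodes) + 1) for s in product(nodes, repeat=n)]

for s in seqs:
    if sol(s) is not None:
        # (2b): no strict prefix of a finalized partial solution is finalized.
        assert all(sol(s[:m]) is None for m in range(1, len(s)))
        # (2c): a finalized partial solution evaluates to its tour length,
        # and all its prefixes are admissible (finite VAL).
        assert val(s) == closed_length(s)
        assert all(val(s[:m]) < math.inf for m in range(1, len(s)))
print("axioms (2b) and (2c) hold on this instance")
```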
Once T is chosen, SOL is determined, and must satisfy Equations 2a and 2b. Then VAL is only loosely constrained by Equation 2c, and can be chosen among a wide range of alternatives, including the following straightforward, uninformed one and the ideal but intractable one (and, more likely, somewhere in between these two extremes):

VALuninformed(f, t1:n) =def f(x) if [SOL(f, t1:n) = x ≠ ⊥] else 0,
VALideal(f, t1:n) =def min{f(x) | x∈X, ∃u1:m∈T* s.t. SOL(f, t1:nu1:m) = x},

with the convention min ∅ = ∞. The value 0 in the uninformed case can be replaced by any constant.

Solution construction as an MDP  Using a specification ⟨T, SOL, VAL⟩ of problem (F, X), one can derive a "naive" MDP as follows.
States are partial solutions (in F×T*); actions are steps (in T); a state is terminal if it is a finalized partial solution; transitions: action u∈T applied to a non-terminal state (f, t1:n) leads to state (f, t1:nu), where u is appended to the sequence so far, with reward VAL(f, t1:n)−VAL(f, t1:nu), conditioned on VAL(f, t1:nu) being finite. Note that VAL has the double role of providing a reward and specifying the set of allowed actions. The number of allowed actions is expected to be linear, or at worst polynomial, in the size of the instance, since picking a step should not be as complex as solving the whole problem.

Now, assume we have access to a generic solver SOLVEMDP which, given an MDP M and one of its states so, returns an optimal trajectory starting at that state, i.e. arg max_τ R(τ), where τ ranges over the M-trajectories starting at so and ending in a terminal state, and R(τ) denotes the cumulated reward of τ. Note that because we are dealing with deterministic MDPs, looking for an optimal policy is the same as looking for an optimal trajectory for a given set of initial states; that is why SOLVEMDP is defined here directly in terms of trajectories rather than policies. SOLVEMDP can then be specialized into a solver for the specific CO problem (F, X):

Proposition 1. Let Mo be the naive MDP obtained from specification ⟨T, SOL, VAL⟩. The procedure defined as follows (where ϵ denotes the empty sequence) satisfies the requirement of Equation 1:

SOLVE(f∈F) =def {SOL(s) | s is the last state of the trajectory SOLVEMDP(Mo, (f, ϵ))}.

In other words, solving the naive MDP is equivalent to solving the CO problem. The detailed proof of Proposition 1 is in Appendix F.
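Proposition 1 can be illustrated with an exhaustive (exponential-time) SOLVEMDP on a tiny TSP instance: the last state of the reward-maximizing trajectory is an optimal tour. This is our own sketch, not the paper's code; for closed tours we start from a fixed first node instead of the empty sequence, without loss of generality, so that VAL is evaluated only on non-empty paths.

```python
import math
from itertools import permutations

# Hedged sketch of Proposition 1: exhaustive SOLVEMDP over the naive TSP-MDP,
# compared against brute force over all tours.
nodes = {'a': (0, 0), 'b': (2, 0), 'c': (2, 1), 'd': (0, 1), 'e': (1, 3)}

def dist(u, v):
    (x1, y1), (x2, y2) = nodes[u], nodes[v]
    return math.hypot(x1 - x2, y1 - y2)

def closed_length(seq):
    return sum(dist(seq[i], seq[i + 1]) for i in range(len(seq) - 1)) + dist(seq[-1], seq[0])

def val(seq):
    # VAL: closed length of the partial path if nodes are pairwise distinct, else inf.
    return closed_length(seq) if len(set(seq)) == len(seq) else math.inf

def solve_mdp(state):
    """Return (cumulated_reward, last_state) of a best trajectory from `state`."""
    if len(state) == len(nodes):            # terminal: finalized partial solution
        return 0.0, state
    best = (-math.inf, None)
    for u in nodes:
        nxt = state + (u,)
        if val(nxt) == math.inf:            # VAL also defines the allowed actions
            continue
        reward = val(state) - val(nxt)
        tail_r, tail_s = solve_mdp(nxt)
        best = max(best, (reward + tail_r, tail_s))
    return best

def solve(start='a'):
    # Proposition 1: read the solution off the last state of an optimal trajectory.
    return solve_mdp((start,))[1]

tour = solve()
brute = min(permutations(nodes), key=closed_length)
assert abs(closed_length(tour) - closed_length(brute)) < 1e-9
```

Maximizing the cumulated reward amounts to minimizing the tour length, since the rewards telescope to VAL of the start state minus VAL of the terminal state.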
Of course, procedure SOLVEMDP may be approximate, in which case so is procedure SOLVE. Moreover, its performance depends on that of SOLVEMDP, especially its out-of-distribution generalization capacity, but also on the choice of specification, especially of the action space. It is a distinguishing feature of CO from an MDP perspective that the action space is not prescribed by the problem. The impact of the choice of the VAL mapping depends on the type of learning used by SOLVEMDP. When SOLVEMDP learns by reinforcement, VAL is essential, as it provides the rewards which guide the resolution. For example, VALuninformed leads to the notoriously hard case of sparse rewards, while VALideal (were it tractable) would lead to the trivial case where a myopic policy (greedy in its immediate reward) is optimal.
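These two extremes can be made concrete on a toy TSP (our own illustration; brute force stands in for the intractable VALideal, and all names are ours):

```python
import math
from itertools import permutations

# Hedged sketch: the two extreme VAL choices on a 4-node Euclidean TSP
# (the unit square), with steps = nodes.
nodes = {'a': (0, 0), 'b': (1, 0), 'c': (1, 1), 'd': (0, 1)}

def dist(u, v):
    (x1, y1), (x2, y2) = nodes[u], nodes[v]
    return math.hypot(x1 - x2, y1 - y2)

def closed_length(seq):
    return sum(dist(seq[i], seq[i + 1]) for i in range(len(seq) - 1)) + dist(seq[-1], seq[0])

def sol(seq):
    return seq if len(seq) == len(nodes) and len(set(seq)) == len(nodes) else None

def val_uninformed(seq):
    # f(x) on finalized partial solutions, an arbitrary constant (here 0) elsewhere.
    x = sol(seq)
    return closed_length(x) if x is not None else 0.0

def val_ideal(seq):
    # Minimum objective over all finalized expansions of the partial solution
    # (intractable in general; brute force is fine at this size).
    rest = [n for n in nodes if n not in seq]
    return min((closed_length(tuple(seq) + p) for p in permutations(rest)),
               default=math.inf)

assert abs(val_ideal(('a',)) - 4.0) < 1e-9                        # best tour: the square
assert abs(val_ideal(('a', 'c')) - (2 + 2 * math.sqrt(2))) < 1e-9 # forced diagonal start
assert val_uninformed(('a', 'c')) == 0.0                          # no signal until finalized
```

Under VALideal, a policy greedy in its immediate reward always picks a step that preserves the best reachable tour length, whereas VALuninformed yields a reward of zero everywhere except at the final step.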
Although we do not provide a generic method to design VAL, we argue that there are natural candidates, typically based on extending the objective function to partial solutions (not just finalized ones). When SOLVEMDP learns by imitation instead, the choice of VAL has a much more limited impact: it only serves to define the allowed actions. The critical factor in that case is the construction of the training dataset of expert trajectories to imitate.

Example on TSP  Consider the widespread CO problem known as the Traveling Salesman Problem (TSP) in a Euclidean space V. A TSP solution (in X) is a path, i.e. a finite sequence of pairwise distinct nodes. A TSP instance (in F) is given by a finite set D of nodes as points in V; it maps any solution (path) to the length of that path (closed at its ends) if it visits exactly all the nodes of D, and to ∞ otherwise (infeasible solutions).
Figure 1: An example of bisimulation commutation in TSP-MDP, and the corresponding path-TSP-MDP transition. The step is the same in all three transitions: it is the end node of the dashed arrow. And the reward is also the same: it depends only on the distances a, u, v, and not on any of the previously visited nodes.

A simple specification ⟨T, SOL, VAL⟩ for the TSP is given by: the step space T is the set of nodes; for a given instance f and sequence t1:n of steps, SOL(f, t1:n) is the sequence t1:n itself if it forms a path which visits exactly all the nodes of f, and ⊥ otherwise; and VAL(f, t1:n) is the length of path t1:n (closed at its ends) if it forms a path which visits only nodes of f (maybe not all), and ∞ otherwise. It is easy to show that we thus obtain a specification (as defined by the axioms above) of the TSP.
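This TSP specification is small enough to render directly in code. The following is our own sketch of f, SOL and VAL as defined above (the function and attribute names are ours, not the paper's):

```python
import math

def closed_length(points, path):
    # Length of the path closed at its ends, on the instance's node coordinates.
    d = lambda u, v: math.dist(points[u], points[v])
    return sum(d(path[i], path[i + 1]) for i in range(len(path) - 1)) + d(path[-1], path[0])

def make_instance(points):
    """A TSP instance f: maps a path to its closed length if it visits exactly
    all nodes (pairwise distinct), and to inf otherwise."""
    def f(path):
        if len(set(path)) != len(path) or set(path) != set(points):
            return math.inf
        return closed_length(points, path)
    f.points = points      # keep the node set accessible for SOL/VAL
    return f

def sol(f, steps):
    # SOL: the sequence itself if it visits exactly all nodes of f, else ⊥ (None).
    ok = len(set(steps)) == len(steps) and set(steps) == set(f.points)
    return tuple(steps) if ok else None

def val(f, steps):
    # VAL: closed length of the partial path if it visits only (pairwise
    # distinct) nodes of f, maybe not all of them, else inf.
    if len(set(steps)) != len(steps) or not set(steps) <= set(f.points):
        return math.inf
    return closed_length(f.points, steps)

pts = {'a': (0, 0), 'b': (3, 0), 'c': (3, 4)}   # a 3-4-5 triangle
f = make_instance(pts)
tour = ('a', 'b', 'c')
assert sol(f, tour) == tour and sol(f, ('a', 'b')) is None
assert val(f, tour) == f(tour) == 3 + 4 + 5     # axiom 2c: VAL of a finalized state is f(x)
```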
In TSP-MDP, the naive MDP obtained from this specification, the reward of taking action u (a node) at state (f, t1:n) is δf(tn, t1) − (δf(tn, u) + δf(u, t1)), where δf is the node distance measured on the corresponding points of V in f, conditioned on t1:nu being pairwise distinct nodes of f. Observe that, when the action is allowed, the reward depends on the partial solution only through the start and end nodes t1, tn of the step sequence.

3 BISIMULATION QUOTIENTING FOR COMBINATORIAL OPTIMIZATION

In our context of deterministic CO problems, and therefore deterministic MDPs, the general notion of bisimilarity is simplified (Givan et al., 2003): two states are said to be bisimilar if they spawn exactly the same action-reward sequences. Likewise, the notion of a binary relation R on states being a bisimulation reduces to a commutation between the (deterministic) transitions of the MDP and that relation: if s1Rs2 and action a applied to state s1 leads to state s′1 with reward r, then action a applied to state s2 leads to a state s′2 with the same reward r, and s′1Rs′2. An illustration is given in Figure 1.
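The reward identity behind the commutation in Figure 1 can be checked numerically: the reward of a step depends only on the path's current endpoints, and the stepwise rewards telescope to minus the tour length. This is our own check, under the TSP specification above:

```python
import math

# Hedged sketch: in TSP-MDP, the reward of a step depends only on the current
# endpoints of the path, and the rewards telescope to minus the tour length.
pts = {'a': (0, 0), 'b': (1, 0), 'c': (2, 2), 'd': (0, 1), 'e': (3, 1)}
d = lambda u, v: math.dist(pts[u], pts[v])

def val(seq):            # VAL: closed length of the partial path
    return sum(d(seq[i], seq[i + 1]) for i in range(len(seq) - 1)) + d(seq[-1], seq[0])

def reward(seq, u):      # the closed form from the text: δf(tn,t1) − (δf(tn,u) + δf(u,t1))
    return d(seq[-1], seq[0]) - (d(seq[-1], u) + d(u, seq[0]))

# Same endpoints (a ... d) and same unvisited node e, different inner order:
# the reward of the next step is identical, as in Figure 1.
s1, s2 = ('a', 'b', 'c', 'd'), ('a', 'c', 'b', 'd')
assert abs(reward(s1, 'e') - reward(s2, 'e')) < 1e-9

# The closed form agrees with VAL differences, and the rewards telescope:
# their sum over a full construction is minus the closed tour length.
tour, total, state = ('a', 'c', 'b', 'd', 'e'), 0.0, ('a',)
for u in tour[1:]:
    assert abs(reward(state, u) - (val(state) - val(state + (u,)))) < 1e-9
    total += reward(state, u)
    state += (u,)
assert abs(total + val(tour)) < 1e-9
```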
Bisimilarity is equivalently defined as the largest bisimulation (see Appendix H.1).

Bisimilarity-induced symmetries In the naive MDP obtained from a specification of a given CO problem, a state is a partial solution and carries the whole information about the “past” decisions (steps) leading to it, which may not all be useful for the “future” decisions, i.e. the completion of that partial solution. Consider for example two states s1, s2 in TSP-MDP, in which the sequence of steps of the partial solution is represented as a directed path in red among some of the problem instance nodes. Observe that s1, s2 differ only in the inner nodes of the red path (black diamond-shaped nodes). Now, it is easy to see that the successful completions of these two partial solutions are identical: they each consist of a path visiting the (same) unvisited (blue) nodes, starting at the end node of the red path and ending at its start node, with the same rewards defined by VAL.
Consequently, in the MDP, the two states s1, s2 spawn exactly the same action-reward sequences and form a bisimilar pair. This is the kind of deep symmetry of the problem which we want the MDP to leverage. Of course, there exist other kinds of symmetries, e.g. rotational symmetries: if s2 is obtained from s1 by applying an isometric transformation to all the points in the problem instance, then s1, s2 also form a bisimilar pair. However, the latter symmetry is specific to the Euclidean version of the TSP. We focus here on the former kind of symmetry as it is more general. Although it has previously been noted for routing problems (Peng et al., 2020; Xin et al., 2021b), we show here that it is an inherent characteristic of constructive CO approaches.

Bisimulation quotienting A classical result on MDPs (Givan et al., 2003) states that all such symmetries in any MDP can be leveraged by quotienting it by its bisimilarity relation, i.e. the set of all bisimilar pairs. Of course, there is no free lunch: constructing the bisimilarity of an MDP is in general intractable. Still, the result remains valuable because it holds for any bisimulation, not just the bisimilarity.
Therefore one can control the amount of symmetry captured in the quotienting by carefully choosing the bisimulation, trading off its closeness to full bisimilarity for tractability. We now assume that, for a given CO problem (F, X), we have access not only to a specification ⟨T, SOL, VAL⟩ with its associated naive MDP, but also to a mapping Φ : F × T∗ → Ŝ from partial solutions to some new space Ŝ. Typically, Φ(f, t1:n) should capture, within the partial solution (f, t1:n), a piece of information as small as possible but sufficient to determine the set of action-reward sequences it spawns in the MDP – in other words, a summary of its “past” which is sufficient to determine its “future”. We can then define an equivalence relation ≡Φ where two partial solutions are equivalent if they have the same image by Φ. For it to be a bisimulation, Φ must satisfy:

∀s1, s2 ∈ F × T∗, Φ(s1) = Φ(s2) ⇒ ( ∀u ∈ T, Φ(s1u) = Φ(s2u) and VAL(s1) − VAL(s1u) = VAL(s2) − VAL(s2u) ).   (3)

Under that assumption, we can construct a new MDP (the quotient of the original one by the bisimulation) which is equivalent, as far as policy optimization is concerned, to the original one, but captures more symmetries of the problem. This allows us to reduce the state space and should lead to better performance, whatever the generic MDP solver used afterwards. Furthermore, by construction, the equivalence classes are in one-to-one correspondence with the states in Ŝ, so that the new MDP can be formulated on that space directly.

Application to the TSP, CVRP and KP Let Φ be the mapping from TSP-MDP states (TSP states for short) into new objects called “path-TSP” states: the inner nodes (black diamonds) on the red path of visited nodes in the TSP state are removed, leaving only the two ends of the red path, which constitute two distinguished nodes in the path-TSP state. Mapping Φ has been designed to satisfy equation 3, so it induces a bisimulation on TSP-MDP (see Figure 1), and TSP-MDP can be turned into an equivalent “path-TSP-MDP” on path-TSP states.
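A concrete rendering of this Φ for the TSP, under the same hypothetical representation as before: the image of a partial solution keeps the unvisited nodes plus the two distinguished path ends, and forgets the inner visited nodes.

```python
def phi_tsp(f, steps):
    """Map a TSP partial solution (f, t_1..t_n) to a path-TSP state.
    f: dict node -> (x, y); steps: sequence of visited nodes.
    Inner visited nodes are dropped; only the two path ends remain, as
    origin (t_n, where the path continues) and destination (t_1, where
    the tour must eventually close)."""
    inner = set(steps[1:-1])
    kept = frozenset((v, f[v]) for v in f if v not in inner)
    return kept, steps[-1], steps[0]
```

Two partial solutions that differ only in the order of their inner visited nodes map to the same path-TSP state, exactly the s1/s2 situation discussed above.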
This path-TSP-MDP can be viewed as solving a variant of the TSP known as path-TSP, hence its name. However, it is not the naive MDP for that variant, since it forgets as it progresses, while naive MDPs always accumulate. With the CVRP, we define a step as the pair of a node and a binary flag specifying whether that node is reached via the depot or directly. We can define a mapping Φ similarly to the TSP case, except it is not sufficient to summarize the “past” (the visited nodes) by just the two ends of their path: to guarantee equation 3 and the bisimulation property, an additional piece of information must be preserved from the past, namely the remaining capacity at the end of the current path. For the KP, intuitively, the summary of the “past” is captured by the remaining items and the remaining knapsack capacity. This idea can be leveraged to design a bisimulation. Formal descriptions of the specifications and bisimulation quotienting for the CVRP and KP are provided in Appendices A and B, respectively.
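A similar sketch for the CVRP summary, where a step is a (node, via_depot) pair and the remaining capacity of the open route is carried along. The representation (dicts for instance and demands) is again a hypothetical choice for illustration:

```python
def phi_cvrp(f, demands, capacity, steps):
    """Summarize a CVRP partial solution: the unvisited nodes, the current
    end node of the open route, and the remaining vehicle capacity.
    steps: sequence of (node, via_depot) pairs; demands: dict node -> demand."""
    remaining = capacity
    for node, via_depot in steps:
        if via_depot:            # passing through the depot refills the vehicle
            remaining = capacity
        remaining -= demands[node]
    visited = {node for node, _ in steps}
    unvisited = frozenset((v, f[v]) for v in f if v not in visited)
    last = steps[-1][0] if steps else None
    return unvisited, last, remaining
```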
4 NEURAL ARCHITECTURE FOR PATH-TSP

We now describe our proposed policy network for the path-TSP-MDP above. Figure 4 (Appendix) provides a quick overview. The models for path-CVRP and BQ-KP differ only slightly and are presented in Appendices A and B. Most neural models for the TSP utilize an encoder-decoder architecture, in which the encoder computes a representation of the entire graph once, and the decoder constructs a solution by taking into consideration the representation of the whole graph and the partial solution, e.g. the Attention Model (Kool et al., 2019) or Pointer Networks (Vinyals et al., 2015).
In our case, the path-TSP formulation allows us to forget the nodes in the graph that have already been visited, except the distinguished origin and destination nodes. As a corollary, it also requires re-encoding the remaining nodes at each prediction step – hence removing the need for a separate auto-regressive decoder. To encode a path-TSP state, we use a Transformer model (Vaswani et al., 2017). Each node is represented by its (x, y) coordinates, so that the input feature matrix for an N-node state is an N×2 matrix. We embed these features via a linear layer. The remainder of the encoder is based on Vaswani et al. (2017), with the following differences. First, we do not use positional encoding, since the input nodes have no order.
Instead, we learn an origin (resp. destination) embedding that is added to the feature embedding of the origin (resp. destination) node. Second, we use ReZero (Bachlechner et al., 2021) normalization, which leads to more stable training and better performance in our experiments (see ablation study in Appendix D). Finally, a last linear layer projects the encoder’s output into a vector of size N, from which unfeasible actions, corresponding to the origin and destination nodes, are masked out, before applying a softmax operator so as to interpret the scalar node values for all allowed nodes as action probabilities.

Training We train our model by imitation of expert trajectories, using a plain cross-entropy loss (behaviour cloning). Such trajectories are extracted from pre-computed optimal (or near-optimal) solutions for instances of a (relatively small) fixed size.
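The masking-and-softmax step of the policy head described above can be sketched as follows; the per-node scores are assumed to come from the final linear layer, and the function is an illustration rather than the paper's code:

```python
import math

def action_probabilities(scores, origin_idx, dest_idx):
    """Turn per-node scalar scores into a probability distribution over the
    allowed next nodes, masking out the infeasible origin/destination actions."""
    masked = [-math.inf if i in (origin_idx, dest_idx) else s
              for i, s in enumerate(scores)]
    m = max(masked)                           # shift for numerical stability
    exps = [math.exp(s - m) for s in masked]  # exp(-inf) = 0.0 for masked entries
    total = sum(exps)
    return [e / total for e in exps]
```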
Note that (optimal) solutions are not directly in the form of trajectories. Equation 2a guarantees that a trajectory exists for any solution, but it is usually far from unique. Besides, optimal solutions are costly, so we seek to make the most out of each of them. In the TSP case, we observe that, given an optimal tour, any sub-path of that tour is also an optimal solution to the associated path-TSP sub-problem, hence amenable to our path-TSP model. We therefore form minibatches by first sampling a number n between 4 and N (path-TSP problems with fewer than 4 nodes are trivial), then sampling sub-paths of length n – the same n for all the minibatch entries so as to simplify batching – from the initial solution set. For the CVRP, the procedure is similar, except that, first, the extracted sub-paths must end at the depot, and, second, they can follow the sub-tours of the full solution in any order. We observed experimentally that the way that order is sampled has an impact on the performance (see Appendix E).
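The TSP minibatch construction just described can be sketched as follows, assuming each training solution is an optimal tour given as a list of node indices:

```python
import random

def sample_subpath_batch(tours, batch_size, rng=random):
    """Sample one sub-path length n in [4, N], then extract a random sub-path
    of that length from each sampled optimal tour.  Tours are cyclic, so a
    sub-path may wrap around the end of the list; each sub-path is an optimal
    solution of its own path-TSP sub-problem."""
    N = len(tours[0])
    n = rng.randint(4, N)  # same n for the whole minibatch to simplify batching
    batch = []
    for _ in range(batch_size):
        tour = rng.choice(tours)
        start = rng.randrange(N)
        batch.append([tour[(start + i) % N] for i in range(n)])
    return batch
```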
Complexity Because of the quadratic complexity of self-attention, and the fact that we call our model at each construction step, the total complexity is O(N³)¹, where N is the instance size. Note that closely related Transformer-based models such as TransformerTSP (Bresson & Laurent, 2021) and the Attention Model (Kool et al., 2019) have a total complexity of O(N²)². At each decision step, for t remaining nodes, our model has a budget of O(t²) compute, whereas previous models only spend O(t). We believe that this is a useful inductive bias, which enables better generalization, in particular for larger problem sizes. This hypothesis is supported by the fact that replacing the self-attention component with a linear-time alternative (i.e., spending O(t) operations per step) drastically degrades the generalization ability to larger instances, as we show in Appendix D.

Summary By reformulating TSP-MDP into path-TSP-MDP, the state is made to contain only a very concise summary of the “past” of a partial solution (how it was formed) as two distinguished nodes, but sufficient to determine its “future” (how it can be completed). Furthermore, at train time, we sample optimal solutions and associated path-TSP states amongst all the possible infixes of solutions of full problems. These proposed modifications go hand in hand. Thanks to the transformation to path-TSP-MDP, our model enables better generalization in two important ways: (i) Due to re-encoding at each step, the encoder produces a graph representation that is specific to the current path-TSP-MDP state. Graphs in these states vary in size and distribution, implicitly encouraging the model to work well across sizes and node distributions, and to generalize better than if such variations were not seen during the training. In this regard, our model is similar to the SW-AM model (Xin et al., 2021b), except that they only approximate the re-embedding process in practice. (ii) By sampling subsequences from our training instances, we automatically get an augmented dataset, which some previous models had to explicitly design their model for (Kwon et al., 2021).

¹ More precisely, the complexity is proportional to Σ_{t=1}^{N} t² = N(N+1)(2N+1)/6, hence the O(N³).
² After an encoder of complexity O(N²), the decoder has linear complexity O(N−t) at step t.

5 RELATED WORK

NCO approaches Many NCO approaches construct solutions sequentially, via auto-regressive models. Starting with the seminal work by Vinyals et al. (2015), which proposed the Pointer network that was based on RNNs and trained in a supervised way, Bello et al. (2017) trained the same model by RL for the TSP and Nazari et al. (2018) adapted it for the CVRP. Kool et al. (2019) introduced an attention-based encoder-decoder architecture (AM), trained with RL to solve several variants of routing problems, which is reused by Kwon et al. (2021) along with a few extensions (POMO). TransformerTSP (Bresson & Laurent, 2021) uses a similar architecture with a different decoder on TSP problems. Another line of work is concerned with directly producing a heat-map of solution segments: Nowak et al. (2018) trained a Graph Neural Network in a supervised manner to output an adjacency matrix, which is converted into a feasible solution using beam search. Joshi et al. (2019) followed a similar framework and trained a deep Graph Convolutional Network instead, which was also used by Fu et al. (2020).

Step-wise methods Peng et al. (2020) first pointed out the limitation of the original AM (Kool et al., 2019) approach in representing the dynamic nature of routing problems. They proposed to update the encoding after each subtour completion for the CVRP. Xin et al. (2021b) proposed a similar step-wise strategy, but with the encodings recomputed after each decision. In practice, their architecture is the most similar to ours for the TSP.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=' However, thanks to our principled MDP transformations based on bisimulation quotienting, we obtain a superior representation for CVRP: In contrast to our approach, their CVRP architecture only provides censored information by omitting the remaining vehicle capacity and simply restricting the state to the nodes whose demand is below the remaining capacity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=' Xin et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=' (2020) extended on this idea by proposing the Multi-Decoder Attention Model (MDAM) that in particular contains a special layer to efficiently approximate the re-embedding process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=' As MDAM constitutes the most advanced version, we employ it as a baseline in our experiments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=' Generalizable NCO Generalization to different instances distributions, and esp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=' larger instances, is regarded as one of the major limitations of current NCO approaches (Joshi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=', 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=' Mazyavkina et al.' 
, 2020). Fu et al. (2020) trained a Graph Convolution model in a supervised manner on small graphs and used it to solve large TSP instances, by applying the model to sampled subgraphs and using an expensive MCTS search to improve the final solution (Att-GCN+MCTS). While this method achieves excellent generalization on TSP instances, MCTS requires substantial computing resources and is essentially a post-learning search strategy. Geisler et al. (2022) investigate the robustness of NCO solvers through adversarial attacks and find that existing neural solvers are highly non-robust to out-of-distribution examples. They conclude that one way to address this issue is through adversarial training. In particular, Xin et al. (2021a) train a GAN to generate instances that are difficult to solve for the current model. Manchanda et al. (2022) take a different approach and leverage meta-learning to learn a model in such a way that it is easily adaptable to new distributions.

Accounting for the symmetries of a given CO problem is a powerful idea to boost the generalization performance of neural solvers. Both Kwon et al. (2021) and Kim et al. (2022) make use of solution symmetry as part of their loss function during training. Problem instance symmetry can be used at training time to augment the dataset (Kwon et al., 2021) or to enforce robust representations (Kim et al., 2022), or it can be used at inference time to augment the set of solutions (Kwon et al., 2021). Note that all of the above are orthogonal to our approach: rather than augmenting data or changing the training paradigm, our approach simplifies the state space by transforming the MDP, which has beneficial effects irrespective of the training method.

6 EXPERIMENTS

To verify the effectiveness of our method, we test it on the TSP, CVRP and KP. This section presents experimental results for the TSP and CVRP, while results for the KP are presented in Appendix B. We train our model and all baselines on synthetic TSP and CVRP instances of size 100, generated as in Kool et al. (2019). We choose graphs of size 100 because it is the largest size for which (near) optimal solutions can still be obtained reasonably fast, and such training datasets are commonly used in the literature.
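As a rough illustration, the generation scheme of Kool et al. (2019) amounts to sampling node coordinates uniformly in the unit square; for the CVRP it additionally draws customer demands and fixes a vehicle capacity (50 for size 100, as used in our test sets). This is a minimal sketch, not the paper's code; the function names and the exact demand range {1,…,9} are assumptions:

```python
import random

def generate_tsp_instance(n, seed=None):
    """Sample a TSP instance: n node coordinates drawn uniformly from the unit square."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n)]

def generate_cvrp_instance(n, capacity=50, seed=None):
    """Sample a CVRP instance: a depot, n customer coordinates, and integer demands."""
    rng = random.Random(seed)
    depot = (rng.random(), rng.random())
    customers = [(rng.random(), rng.random()) for _ in range(n)]
    demands = [rng.randint(1, 9) for _ in range(n)]  # assumed demand range
    return depot, customers, demands
```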
Then, we evaluate the trained models on synthetic instances of size 100, 200, 500 and 1000 generated from the same distribution, as well as on the standard TSPLib and CVRPLib datasets.

Hyperparameters and training procedure We use the same hyperparameters for all problems. The model has 9 layers, each built with 8 attention heads, an embedding size of 128 and a feed-forward dimension of 512. Our model is trained for 1000 epochs on 1 million instances with 100 nodes, split into batches of size 1024. Solutions of these training instances are obtained with the Concorde solver (Applegate et al., 2015) for the TSP and the LKH heuristic (Helsgaun, 2017) for the CVRP. We use Adam (Kingma & Ba, 2017) as optimizer, with an initial learning rate of 7.5e-4 decayed by a factor of 0.98 every 20 epochs.

Evaluation We compare our model with existing state-of-the-art methods: OR-Tools (Perron & Furnon, 2022), LKH (Helsgaun, 2017) and Hybrid Genetic Search (HGS) for the CVRP (Vidal, 2022) as traditional non-neural methods; Att-GCN+MCTS and NeuralRewriter (Chen & Tian, 2019) as hybrid methods for the TSP and CVRP respectively; and the deep-learning-based constructive methods AM, TransformerTSP, MDAM and POMO, which were discussed in Section 5. For all deep learning baselines we use the model trained on graphs of size 100 and the best decoding strategy. Following the same procedure as Fu et al. (2020), we generate four test datasets with graphs of sizes 100, 200, 500 and 1000. For the CVRP, we use capacities of 50, 80, 100 and 250, respectively.
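The step-wise learning-rate decay used in training above determines the rate at any epoch directly; a one-line sketch (the function name is ours):

```python
def learning_rate(epoch, initial_lr=7.5e-4, decay=0.98, step=20):
    """Learning rate after applying a multiplicative decay of `decay` every `step` epochs."""
    return initial_lr * decay ** (epoch // step)
```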
In addition, we report results on TSPLib instances with up to 4461 nodes and on all CVRPLib instances with node coordinates in the Euclidean space. For all models, we report the optimality gap and the inference time. The optimality gap for the TSP is based on the optimal solutions obtained with Concorde. For the CVRP, although HGS gives better results than LKH, we use the LKH solution as a reference to compute the "optimality" gap, in order to be consistent (and easily comparable) with previous works. While the optimality gap is easy to compute and compare, running times are much harder to measure: they may vary with the implementation platform (Python, C++), hardware (GPU, CPU), parallelization, batch size, etc. Therefore, we also report the number of solutions generated by each of the constructive deep learning models. In our experiments, we run all deep learning models on a single Nvidia Tesla V100-S GPU with 24GB of memory, and the other solvers on an Intel(R) Xeon(R) CPU E5-2670 with 256GB of memory, in a single thread.
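The optimality gap reported throughout is the usual relative excess of a solution's cost over the reference cost (Concorde for the TSP, LKH for the CVRP); a minimal sketch (the function name is ours):

```python
def optimality_gap(cost, reference_cost):
    """Relative gap (in %) of a solution's cost over a reference solver's cost."""
    return 100.0 * (cost - reference_cost) / reference_cost
```

Note that with LKH as the reference, a method that beats LKH (such as HGS on the CVRP) gets a negative gap.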
Results Tables 1a and 1b summarize our results on the TSP and CVRP, respectively. For both problems, our model shows superior generalization on larger graphs, even with the greedy decoding strategy, which generates only a single solution while all the others generate several hundred (and select the best among them). In terms of running time with greedy decoding, our model is competitive with the POMO baseline and significantly faster than the other models. Beam search decoding with beam size 16 further improves the quality of the solutions but, as expected, takes approximately 16 times longer. Figure 2 shows the optimality gap versus running time for our model and the baseline models. Our model clearly outperforms the other models in terms of generalization to larger instances. The only model that is competitive with ours is Att-GCN+MCTS, but it is 2-15 times slower and is designed for the TSP only.
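Greedy decoding is the special case of beam width 1. As a generic illustration (not the paper's implementation), beam search over a step-wise construction policy can be sketched as follows; the state encoding and the `step_fn`/`is_final` interface are our own illustrative assumptions:

```python
def beam_search(initial_state, step_fn, is_final, beam_size=16):
    """Keep the `beam_size` highest log-probability partial solutions at each step.

    step_fn(state) -> list of (log_prob, next_state) expansions;
    is_final(state) -> True when the state is a complete solution.
    """
    beam = [(0.0, initial_state)]  # (cumulative log-probability, state)
    while not all(is_final(state) for _, state in beam):
        candidates = []
        for score, state in beam:
            if is_final(state):
                candidates.append((score, state))  # finished paths stay in the beam
            else:
                for logp, nxt in step_fn(state):
                    candidates.append((score + logp, nxt))
        beam = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_size]
    return max(beam, key=lambda c: c[0])[1]  # most likely complete solution
```

With `beam_size=1` this reduces to greedy decoding, which can miss sequences whose first step looks locally worse but whose completion is globally more likely.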
In addition to the synthetic datasets, we test our model on TSPLib and CVRPLib instances, which vary in graph size, node distribution, demand distribution and vehicle capacity. Table 1c shows our model's advantage over MDAM and POMO, even with the greedy decoding strategy. These results confirm the effectiveness of our MDP transformation method and of the resulting neural architecture. Thanks to our more principled approach, which leads to better state representations and a simpler architecture without a decoder, our model, while generating a single solution, is able to outperform MDAM (with 250 solutions), which is conceptually closest to our model. Moreover, an ablation study in Appendix D suggests that spending an appropriate amount of compute on each subproblem is a crucial factor in our model.

7 CONCLUSION

We have presented a flexible framework to derive MDPs that sequentially construct solutions to CO problems.
Starting from a naive MDP, we introduced a generic transformation using bisimulation quotienting, which reduces the state space by leveraging its symmetries. We applied this transformation to the TSP and CVRP, for which we also designed a simple attention-based model, well suited to the transformed state representation. We showed experimentally that this combination of state representation, simple model and training procedure yields state-of-the-art generalization results on diverse benchmarks. While training on relatively small instances allowed us to use imitation learning, our approach and model could similarly be used with reinforcement learning. Finally, we have focused on deterministic CO problems, leaving the adaptation of our framework to stochastic problems as future work.

Table 1: Summary of the experimental results. The bold values represent the best optimality gap (lower is better) and the fastest inference time.
The underlined cells represent the best ratio between solution quality and inference time. #s refers to the number of generated solutions.

                        #s    TSP100        TSP200        TSP500        TSP1000
Concorde                      0.000%  38m   0.000%   2m   0.000%  40m   0.000%   2.5h
OR-Tools                      3.765%  1.1h  4.516%   4m   4.891%  31m   5.021%   2.4h
Att-GCN+MCTS*                 0.037%  15m   0.884%   2m   2.536%   6m   3.223%   13m
AM bs1024               1024  2.510%  20m   6.176%   1m   17.978%  8m   29.750%  31m
TransTSP bs1024         1024  0.456%  51m   5.121%   1m   36.142%  9m   76.215%  37m
MDAM bs50               250   0.395%  45m   2.044%   3m   9.878%  13m   19.965%  1.1h
POMO augx8              8N    0.134%   1m   1.572%   5s   20.182%  1m   40.603%  10m
BQ (ours) greedy        1     0.540%   1m   0.793%   5s   1.425%   1m   2.335%    7m
BQ (ours) bs16          16    0.032%  18m   0.166%   1m   0.682%  15m   1.311%   1.8h

(a) Experimental results on TSP. *We could not run the Att-GCN+MCTS code on our architecture, so we report the results from the original paper.

                        #s    CVRP100        CVRP200       CVRP500        CVRP1000
LKH                           0.000%  15.3h  0.000%  30m   0.000%   1.3h  0.000%    2.8h
HGS                           -0.510% 15.3h  -1.024% 30m   -1.252%  1.3h  -1.104%   2.8h
OR-Tools                      9.617%  15.3h  10.700% 30m   11.403%  1.3h  13.559%   2.8h
NeuRewriter*                  3.456%  1.1h   29.460%  9m   25.051%  32m   29.542%   1.8h
AM bs1024               1024  4.180%  24m    7.786%   1m   16.964%   8m   86.410%   31m
MDAM bs50               250   2.206%  56m    4.332%   3m   9.994%   14m   28.015%   1.4h
POMO augx8              8N    0.689%   1m    4.767%   5s   20.575%   1m   141.058%  10m
BQ (ours) greedy        1     4.832%   1m    3.723%   5s   3.429%    1m   6.809%     7m
BQ (ours) bs16          16    1.798%  18m    1.375%   1m   0.817%   15m   2.048%    1.8h

(b) Experimental results on CVRP. *We could not reproduce the reported results for NeuRewriter, so for CVRP100 we report the result from the original paper and for the other sizes the best result we obtained.

              MDAM     POMO     BQ (ours)
Size          bs50     x8       greedy    bs16
<100          3.06%    0.42%    0.38%     0.06%
100-200       5.14%    2.31%    2.82%     1.61%
200-500       11.32%   13.32%   3.31%     2.07%
500-1K        20.40%   31.58%   10.08%    3.04%
>1K           40.81%   62.61%   11.87%    8.61%
All           19.01%   26.30%   6.22%     3.94%

              MDAM     POMO     BQ (ours)
Set (size)    bs50     augx8    greedy    bs16
A (32-80)     6.17%    4.86%    5.85%     1.96%
B (30-77)     8.77%    5.13%    7.04%     3.50%
F (44-134)    16.96%   15.49%   7.20%     3.04%
M (100-200)   5.92%    4.99%    6.69%     1.85%
P (15-100)    8.44%    14.69%   4.71%     1.32%
X (100-1K)    34.17%   21.62%   10.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content='74% 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content='35% All (15-1K) 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content='36% 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content='58% 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content='58% 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content='60% (c) Experimental results on TSPLib (left) and CVRPLib (right).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=' 10 −4 10 −3 10 −2 10 −1 10 0 Inference time (per instance,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=' in seconds) 0 5 10 15 20 25 30 35 40 Optimality gap AM bs1024 MDAM bs50 POMO augx8 Att- GCN+MCTS BQ (ours) greedy BQ (ours) bs16 TSP100 TSP200 TSP500 TSP1000 10 −4 10 −3 10 −2 10 −1 10 0 Inference time (per instance,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=' in seconds) 0 20 40 60 80 100 120 140 Optimality gap AM bs1024 MDAM bs50 POMO augx8 NeuralRewriter BQ (ours) greedy BQ (ours) bs16 CVRP100 CVRP200 CVRP500 CVRP1000 Figure 2: Generalization results on different graph sizes for TSP (left) and CVRP (right).' 
Lower and further left is better.

REPRODUCIBILITY STATEMENT

In order to ensure the reproducibility of our approach, we have:
- described precisely our generic theoretical framework (Section ??) and provided a detailed proof of Proposition 1 in Appendix F; this should in particular help adapt the framework to other CO problems;
- explained our proposed model in detail (Section 4 for TSP and Appendix A for CVRP), described the training procedure precisely and listed the hyperparameters (Section 6);
- used public datasets referenced in Section 6.

Furthermore, we plan to make our code public upon acceptance.

REFERENCES

David Applegate, Robert Bixby, Vasek Chvatal, and William Cook.
Concorde TSP solver. University of Waterloo, 2015.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer Normalization, July 2016.
Thomas Bachlechner, Bodhisattwa Prasad Majumder, Henry Mao, Gary Cottrell, and Julian McAuley. ReZero is all you need: Fast convergence at large depth. In Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, pp. 1352-1361. PMLR, December 2021.
Irwan Bello, Hieu Pham, Quoc V. Le, Mohammad Norouzi, and Samy Bengio. Neural Combinatorial Optimization with Reinforcement Learning. arXiv:1611.09940 [cs, stat], January 2017.
Yoshua Bengio, Andrea Lodi, and Antoine Prouvost. Machine learning for combinatorial optimization: A methodological tour d'horizon. European Journal of Operational Research, 290(2):405-421, 2021.
Ilhem Boussaïd, Julien Lepagnot, and Patrick Siarry. A survey on optimization metaheuristics. Information Sciences, 237:82-117, July 2013. ISSN 0020-0255. doi: 10.1016/j.ins.2013.02.041.
Xavier Bresson and Thomas Laurent. The Transformer Network for the Traveling Salesman Problem. arXiv:2103.03012 [cs], March 2021.
Quentin Cappart, Didier Chételat, Elias Khalil, Andrea Lodi, Christopher Morris, and Petar Veličković. Combinatorial optimization and reasoning with graph neural networks. arXiv:2102.09544 [cs, math, stat], February 2021.
Xinyun Chen and Yuandong Tian. Learning to Perform Local Rewriting for Combinatorial Optimization. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
William J. Cook, William H. Cunningham, William R. Pulleyblank, and Alexander Schrijver. Combinatorial Optimization. Wiley-Interscience, New York, 1st edition, November 1997. ISBN 978-0-471-55894-1.
Zhang-Hua Fu, Kai-Bin Qiu, and Hongyuan Zha. Generalize a Small Pre-trained Model to Arbitrarily Large TSP Instances. arXiv:2012.10658 [cs], December 2020.
Simon Geisler, Johanna Sommer, Jan Schuchardt, Aleksandar Bojchevski, and Stephan Günnemann. Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness, March 2022.
Robert Givan, Thomas Dean, and Matthew Greig. Equivalence notions and model minimization in Markov decision processes. Artificial Intelligence, 147(1):163-223, July 2003. ISSN 0004-3702. doi: 10.1016/S0004-3702(02)00376-4.
Keld Helsgaun. An Extension of the Lin-Kernighan-Helsgaun TSP Solver for Constrained Traveling Salesman and Vehicle Routing Problems. Technical report, Roskilde University, Roskilde, Denmark, 2017.
Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on Machine Learning, pp. 448-456. PMLR, June 2015.
Chaitanya K. Joshi, Thomas Laurent, and Xavier Bresson. An Efficient Graph Convolutional Network Technique for the Travelling Salesman Problem. arXiv:1906.01227 [cs, stat], June 2019.
Chaitanya K. Joshi, Quentin Cappart, Louis-Martin Rousseau, and Thomas Laurent. Learning the travelling salesperson problem requires rethinking generalization. Constraints, 27(1-2):70-98, April 2022. ISSN 1383-7133. doi: 10.17863/CAM.85173.
Elias Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning Combinatorial Optimization Algorithms over Graphs. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 6348-6358. Curran Associates, Inc., 2017.
Minsu Kim, Junyoung Park, and Jinkyoo Park. Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization, May 2022.
Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization, January 2017.
Wouter Kool, Herke van Hoof, and Max Welling. Attention, Learn to Solve Routing Problems! In International Conference on Learning Representations, 2019.
Yeong-Dae Kwon, Jinho Choo, Byoungjip Kim, Iljoo Yoon, Youngjune Gwon, and Seungjai Min. POMO: Policy Optimization with Multiple Optima for Reinforcement Learning. arXiv:2010.16011 [cs], July 2021.
Florian Mai, Arnaud Pannatier, Fabio Fehr, Haolin Chen, Francois Marelli, Francois Fleuret, and James Henderson. HyperMixer: An MLP-based Green AI Alternative to Transformers, March 2022.
Sahil Manchanda, Sofia Michel, Darko Drakulic, and Jean-Marc Andreoli. On the Generalization of Neural Combinatorial Optimization Heuristics. In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), Grenoble, France, September 2022.
Nina Mazyavkina, Sergey Sviridov, Sergei Ivanov, and Evgeny Burnaev. Reinforcement Learning for Combinatorial Optimization: A Survey. arXiv:2003.03600 [cs, math, stat], March 2020.
MohammadReza Nazari, Afshin Oroojlooy, Lawrence Snyder, and Martin Takac. Reinforcement Learning for Solving the Vehicle Routing Problem. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
Alex Nowak, David Folqué, and Joan Bruna. Divide and Conquer Networks. In 6th International Conference on Learning Representations, Vancouver, Canada, April 2018.
Bo Peng, Jiahai Wang, and Zizhen Zhang. A Deep Reinforcement Learning Algorithm Using Dynamic Attention Model for Vehicle Routing Problems. arXiv:2002.03282 [cs, stat], February 2020.
Laurent Perron and Vincent Furnon. OR-tools. Google, 2022.
Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. Efficient Transformers: A Survey. ACM Computing Surveys, April 2022. ISSN 0360-0300. doi: 10.1145/3530811.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. arXiv:1706.03762 [cs], June 2017.
Thibaut Vidal. Hybrid genetic search for the CVRP: Open-source implementation and SWAP* neighborhood. Computers & Operations Research, 140:105643, April 2022. ISSN 0305-0548. doi: 10.1016/j.cor.2021.105643.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer Networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems 28, pp. 2692-2700. Curran Associates, Inc., 2015.
Liang Xin, Wen Song, Zhiguang Cao, and Jie Zhang. Multi-Decoder Attention Model with Embedding Glimpse for Solving Vehicle Routing Problems, December 2020.
Liang Xin, Wen Song, Zhiguang Cao, and Jie Zhang. Generative Adversarial Training for Neural Combinatorial Optimization Models, September 2021a.
Liang Xin, Wen Song, Zhiguang Cao, and Jie Zhang. Step-Wise Deep Learning Models for Solving Routing Problems. IEEE Transactions on Industrial Informatics, 17(7):4861-4871, July 2021b. ISSN 1941-0050. doi: 10.1109/TII.2020.3031409.
A APPLICATION TO THE CVRP

Problem definition and specification  The Capacitated Vehicle Routing Problem (CVRP) is a vehicle routing problem in which a vehicle (here, a single one) with limited capacity must deliver items from a depot location to various customer locations. Each customer has an associated demand, which represents an amount of items, and the problem is for the vehicle to serve all the customers with the least travel distance, returning to the depot to refill as many times as needed, but without ever exceeding the vehicle capacity. Formally, we assume given a set of customer nodes, each with a demand (a positive scalar), plus a depot node.
A CVRP solution (in X) is a finite sequence of nodes starting at the depot, which are pairwise distinct except for the depot, and respecting the capacity constraint: the total demand of any contiguous sub-sequence of customer nodes is below the vehicle capacity. A CVRP instance (in F) is given by a finite set D of nodes, including the depot, together with their coordinates in the Euclidean space V; it maps any solution to the length of the corresponding path using the distances in V if the path visits exactly all the nodes of D, or to ∞ otherwise (infeasible solutions). A possible specification ⟨T, SOL, VAL⟩ for the CVRP is defined as follows. The step space T is the set of pairs of a non-depot node and a binary flag indicating whether that node is to be reached via the depot or directly. The extension ¯t of a step t is either the singleton of its node component if its flag is 0, or the pair of the depot node and its node component if its flag is 1. For a given problem instance f and sequence t1:n of steps, SOL(f, t1:n) is the sequence ¯t1:n if it forms a d-path which visits exactly all the nodes of f, or ⊥ otherwise.
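As an illustration of this specification, the step extension and the capacity check on a sequence of steps can be sketched as follows. This is a minimal sketch with hypothetical names; the paper defines these objects abstractly, not as code.

```python
# Sketch of the CVRP step space: a step is (node, flag), where flag=1 means
# the node is reached via the depot. All names here are illustrative only.
DEPOT = 0

def extend(step):
    """Extension t-bar of a step t: the node alone, or the depot then the node."""
    node, flag = step
    return (DEPOT, node) if flag == 1 else (node,)

def respects_capacity(steps, demand, capacity):
    """Check that no contiguous sub-sequence between depot visits exceeds the capacity."""
    load = 0
    for node, flag in steps:
        if flag == 1:          # passing through the depot resets the load
            load = 0
        load += demand[node]
        if load > capacity:
            return False
    return True

demand = {1: 4, 2: 3, 3: 5}
steps = [(1, 0), (2, 0), (3, 1)]   # visit 1, 2, then 3 via the depot
assert extend((3, 1)) == (DEPOT, 3)
assert respects_capacity(steps, demand, capacity=10)
assert not respects_capacity(steps, demand, capacity=6)
```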
VAL(f, t1:n) is the total length of ¯t1:n (closed at its end) if it forms a d-path which visits only nodes of f (possibly not all of them), or ∞ otherwise. It is easy to show that ⟨T, SOL, VAL⟩ forms a specification for the CVRP (i.e. it satisfies the axioms of specifications introduced in Section ??). The naive MDP obtained from it is denoted CVRP-MDP.

Bisimulation quotienting  Just as for the TSP, we can define a mapping Φ from CVRP-MDP states to a new “path-CVRP” state space, informally described by the following diagram.

[Figure: a CVRP state with vehicle capacity C=10 mapped by Φ to a path-CVRP state with remaining capacity C=3.]

Here, the capacity of the vehicle is C=10, shown next to the (colourless) depot node, and the demand of each node is shown next to it, in orange.
The black dotted line indicates that the action which introduced the node with demand 2 was via the depot: its flag was set to 1 (all the other actions had their flag set to 0 in this simple example). The green dotted line indicates how the path is closed to measure its length. After the node with demand 2, the path of visited nodes (in red) continues with nodes of demand 4 and 1, respectively, so that the remaining capacity at the end of the path is C−(2+4+1)=3. Compared to the TSP, this is the additional piece of information in the summary of the “past” (the path of visited nodes) which is preserved in the path-CVRP state, together with the origin and destination of the path. The mapping Φ thus defined satisfies Equation 3, hence induces a bisimulation on CVRP-MDP states, and by quotienting, one obtains an MDP which can be defined directly on path-CVRP states.

Model architecture for CVRP  The model architecture for CVRP is almost the same as for the TSP, with a slight difference in the input sequence and in the output layer.
In the TSP model, the input to the node embedding layer for an N-node state is a 2×N matrix of coordinates. For CVRP, we use two additional channels: one for the node demands, and one for the current vehicle capacity, repeated across all nodes. The demand is set to zero for the origin and destination nodes. We obtain a 4×N matrix of features, which is passed through a learned embedding layer. As for the TSP, a learned origin-signalling (resp. destination-signalling) vector is added to the corresponding embeddings. The rest of the architecture, in the form of attention layers, is identical to the TSP, up to the action scores projection layer. In the case of the TSP, the projection layer returns a vector of N scores, where each score, after a softmax, represents the probability of choosing the node as the next step in the construction.
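The input construction described above can be sketched as follows; this is a minimal NumPy illustration with hypothetical array names, not the model's actual implementation.

```python
import numpy as np

def cvrp_features(coords, demands, remaining_capacity, origin, destination):
    """Build the 4xN feature matrix: x, y, demand, and the remaining vehicle
    capacity broadcast to all nodes. Origin/destination demands are zeroed,
    as described in the text."""
    n = coords.shape[1]
    demands = demands.astype(float).copy()
    demands[[origin, destination]] = 0.0
    capacity_channel = np.full(n, remaining_capacity, dtype=float)
    return np.vstack([coords, demands[None, :], capacity_channel[None, :]])

coords = np.random.rand(2, 5)                  # 2xN coordinates, as in the TSP model
demands = np.array([0.0, 1.0, 4.0, 2.0, 3.0])
features = cvrp_features(coords, demands, remaining_capacity=7.0,
                         origin=0, destination=4)
assert features.shape == (4, 5)
assert features[2, 0] == 0.0 and features[2, 4] == 0.0
```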
In the case of CVRP, the model returns a matrix of scores of dimension N×2, corresponding to each possible action (node-flag pair), and the softmax is taken over this whole matrix. As usual, a mask is applied to infeasible actions before the softmax operator: the nodes whose demand exceeds the remaining vehicle capacity, as well as the origin and destination nodes.

B APPLICATION TO THE KNAPSACK PROBLEM

Problem definition and specification  The Knapsack Problem (KP) is a classical combinatorial optimization problem in which we need to pack items, with given values and weights, into a knapsack with a given capacity. The objective is to maximize the total value of the packed items. Formally, we assume given a set of items, each with a value and a weight. A KP solution (in X) is a subset of the items which respects a capacity constraint (a “c-subset”): the total weight of the items of the subset must not exceed the knapsack capacity. A KP instance (in F) is given by a finite set D of items and maps any c-subset to the sum of the values of its items.
A simple problem specification ⟨T, SOL, VAL⟩ can be defined as follows. The step space T is equal to the set of items. For a partial solution (f, t1:n), if the selected items satisfy the capacity constraint and adding any of the remaining items would result in an infeasible solution, then SOL(f, t1:n) returns the subset of selected items; otherwise it returns ⊥. Finally, VAL(f, t1:n) is the sum of the values of the items in t1:n if they satisfy the capacity constraint, and ∞ otherwise. Similarly to the TSP and CVRP cases, it is easy to show that ⟨T, SOL, VAL⟩ forms a specification for the KP. The naive MDP obtained from it is denoted KP-MDP.

Bisimulation quotienting  As was the case for the TSP and CVRP, we can define a mapping Φ from KP-MDP states to a new “BQ-KP” state space, informally described by the following diagram.
[Figure: a KP state with items shown as (weight, value) cells and capacity C = 20, mapped by Φ to a BQ-KP state containing the unpicked items and the remaining capacity C = 10.]

Here, the capacity of the knapsack is C = 20 and each item is defined by its weight (bottom cell) and value (top cell). The mapping Φ for KP is straightforward: it simply removes all picked items and updates the remaining capacity by subtracting the total weight of the removed items from the previous capacity.

Model architecture for KP  The model architecture for KP is again very similar to the previously described models for TSP and CVRP. The input to the model is a 3×N tensor composed of the item properties (values, weights) and an additional channel for the remaining knapsack capacity. By definition, the solution has no order (the result is a set of items), so there is no need to add tokens for the origin and destination. Apart from excluding these tokens and the different input dimensions, the rest of the model is identical to the TSP model.
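The mapping Φ described above, which drops the picked items and decrements the capacity, can be sketched as follows. This is a minimal illustration of the quotienting with hypothetical names and toy numbers, not the paper's implementation.

```python
def phi_kp(items, picked, capacity):
    """BQ-KP state: the remaining items together with the reduced capacity.
    `items` maps item id -> (value, weight); `picked` is the set of chosen ids."""
    remaining = {i: vw for i, vw in items.items() if i not in picked}
    used = sum(items[i][1] for i in picked)
    return remaining, capacity - used

items = {1: (9, 3), 2: (8, 7), 3: (7, 9), 4: (6, 1)}   # id: (value, weight)
state = phi_kp(items, picked={2, 3}, capacity=20)
assert state == ({1: (9, 3), 4: (6, 1)}, 4)
```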
The output is a vector of N probabilities over all items, with a mask over the infeasible ones (those whose weight is larger than the remaining knapsack capacity). During training, at each construction step, any item of the ground-truth solution is a valid choice. Therefore we use a multi-class cross-entropy loss.

Experimental results for KP  We generate the training dataset as described in Kwon et al. (2021). We train our model on 1M KP instances of size 200 and capacity 25, with values and weights randomly sampled from the unit interval. We use the dynamic programming algorithm from OR-Tools to compute the ground-truth optimal solutions. We use the same hyperparameters as for the TSP.
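For reference, exact knapsack solutions can be obtained with the classical dynamic program. The sketch below assumes integer weights; real-valued weights like the ones used here must first be scaled to integers before such a table-based DP applies. It is a textbook illustration, not the OR-Tools routine used in the experiments.

```python
def knapsack_dp(values, weights, capacity):
    """Classical 0/1 knapsack DP over integer capacities.
    Returns the optimal total value."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Items (value, weight): (9,3), (8,7), (7,9), (6,1); the best feasible
# subset for capacity 10 is {(9,3), (8,7)} with total value 17.
assert knapsack_dp(values=[9, 8, 7, 6], weights=[3, 7, 9, 1], capacity=10) == 17
```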
Then, we evaluate our model on test datasets with 200, 500 and 1000 items, with a capacity of 25 and 50 for each problem size. Table 2 shows the performance of our model compared to POMO, one of the best-performing NCO models on KP. Although our model does not outperform it on every dataset, it achieves a better overall performance. It should be noted again that POMO builds N solutions per instance and chooses the best one, while our model generates a single solution per instance but still achieves better results.

                Optimal   POMO (single traj.)   POMO (all traj.)    BQ (greedy)
                value     value    opt gap      value    opt gap    value    opt gap
N=200   C=25    58.023    57.740   0.476%       58.007   0.017%     57.970   0.081%
        C=50    80.756    79.483   1.544%       79.787   1.170%     80.710   0.056%
N=500   C=25    90.986    85.309   6.217%       86.516   4.897%     90.150   0.904%
        C=50    129.326   128.950  0.291%       129.272  0.042%     128.369  0.739%
N=1000  C=25    128.692   120.757  5.386%       123.572  3.973%     121.217  5.808%
        C=50    182.898   170.920  6.545%       172.427  5.724%     175.093  4.267%
All                                3.552%                2.648%              1.980%

Table 2: Experimental results on KP.

                       Greedy        Beam size 16    Beam size 64
TSP200    Full graph   0.79%   5s    0.17%   1m      0.08%   5m
          100KNNs      1.31%   3s    0.23%   33s     0.10%   3m
TSP500    Full graph   1.71%   1m    0.68%   15m     0.54%   1h
          100KNNs      2.58%   18s   0.92%   3m      0.69%   12m
          250KNNs      1.56%   32s   0.67%   9m      0.53%   30m
TSP1000   Full graph   2.34%   7m    1.31%   1.8h    1.19%   7.3h
          100KNNs      3.34%   25s   1.69%   6m      1.45%   24m
          250KNNs      2.53%   1m    1.43%   23m     1.19%   1.4h
CVRP200   Full graph   4.80%   5s    2.42%   1m      1.82%   5m
          100KNNs      5.18%   3s    2.12%   33s     1.68%   3m
CVRP500   Full graph   4.74%   1m    2.10%   15m     1.59%   1h
          100KNNs      5.14%   18s   2.02%   3m      1.74%   12m
          250KNNs      4.58%   32s   1.86%   9m      1.14%   30m
CVRP1000  Full graph   8.00%   7m    3.19%   1.8h    2.39%   7.3h
          100KNNs      8.25%   25s   4.76%   6m      3.58%   24m
          250KNNs      7.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content='51% 1m 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content='08% 23m 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content='28% 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content='4h Table 3: Improving the model performance using a k-nearest-neighbor heuristic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=' C IMPROVING THE MODEL PERFORMANCE WITH A k-NEAREST-NEIGHBOR HEURISTIC Our decoding strategy could be further improved by using a k-nearest-neighbor heuristic to restrict the search space and reduce the inference time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=' For both greedy and beam search strategies, at every step, it is possible to reduce the remaining graph by considering only a certain number of neighbouring nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=' Table 3 presents the experiments on TSP and CVRP where we apply the model just on a certain number on nearest neighbours of the origin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9NE1T4oBgHgl3EQfnwSt/content/2301.03313v1.pdf'} +page_content=' This approach clearly reduces the execution time, but also in some cases even improves the performance in terms of optimality gap.' 
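As a rough illustration of this restriction (a hypothetical sketch, not the paper's implementation): at each step the candidate set is pruned to the k nearest neighbours of the current origin, and the plain nearest-neighbour choice below stands in for the trained policy's scoring of the pruned candidates.

```python
import math

def knn_candidates(coords, origin, remaining, k):
    """Keep only the k nearest neighbours (Euclidean distance) of the
    origin node among the remaining nodes."""
    return sorted(remaining, key=lambda i: math.dist(coords[i], coords[origin]))[:k]

def greedy_knn_tour(coords, k):
    """Greedy construction restricted at each step to the k nearest
    neighbours of the current origin.  The nearest-neighbour choice
    below is an illustrative stand-in for the trained policy, which
    would score only the restricted candidates."""
    tour = [0]
    remaining = set(range(1, len(coords)))
    while remaining:
        cand = knn_candidates(coords, tour[-1], remaining, k)
        nxt = min(cand, key=lambda i: math.dist(coords[i], coords[tour[-1]]))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour
```

With beam search, the same pruning would simply be applied to each beam's candidate expansions.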
The same heuristic can be applied to the Knapsack Problem, where the model would be applied only to a certain number of items with the highest values.

D ABLATION STUDY

D.1 TRANSFORMER VS HYPERMIXER AS MODEL

In Section 6 we have shown that our model has an excellent ability to generalize to graphs of larger size. In Section ??, we hypothesize that this has to do with the fact that a subproblem of size t spends O(t^2) computation operations, due to the quadratic complexity of the Transformer encoder's self-attention component, which is responsible for mixing node representations. To test this hypothesis, we experiment with replacing self-attention with an efficient mixing component (see Tay et al. (2022) for an overview), namely the recent linear-time HyperMixer (Mai et al., 2022).
We chose this model because it does not assume that the input is ordered, unlike e.g. sparse-attention alternatives.

Seed    TSP100   TSP200   TSP500    TSP1000
1       2.10%    8.38%    34.91%    71.30%
2       1.38%    3.54%    98.59%    628.71%
3       1.93%    4.14%    120.18%   216.77%
4       1.37%    4.54%    46.23%    104.85%
5       1.25%    3.66%    61.99%    524.43%

Table 4: Experimental results on TSP with HyperMixer for five different seeds.
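As a rough illustration of the linear-time alternative, here is a sketch of a tied-weight HyperMixer-style token-mixing layer, under our own simplifications and not the authors' implementation: the token-mixing weights are generated from the tokens themselves, so the cost scales linearly in the number of tokens.

```python
import numpy as np

def hypermixing(X, W_hyp):
    """HyperMixer-style token mixing (tied variant), as a sketch:
    the (t x k) mixing weights P are generated from the tokens
    themselves, so the cost is O(t*d*k) rather than the O(t^2*d)
    of self-attention.  X: (t, d) node features; W_hyp: (d, k)."""
    P = X @ W_hyp                       # (t, k): generated mixing weights
    hidden = np.maximum(P.T @ X, 0.0)   # (k, d): pool all tokens
    return P @ hidden                   # (t, d): redistribute to tokens
```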
Experimental Details. For comparability, we set the model and training parameters to the same values as for the Transformer, so the experiments differ only in the token-mixing component used. The only other difference is that we used Layer Normalization (Ba et al., 2016) instead of ReZero (Bachlechner et al., 2021), which leads to more stable training for HyperMixer. Since we observed a relatively large sensitivity to model initialization, we report the results for 5 different seeds.

Results. Table 4 shows the results for HyperMixer with greedy decoding. While the model reaches lower but still acceptable performance compared to the Transformer on TSP100, it generalizes poorly to instances of larger size. Moreover, performance is very sensitive to the seed.
These results suggest that the computation spent by self-attention, which grows with the size of the (sub)problem, is indeed necessary to reach the generalization ability of our model.

D.2 APPROXIMATED MODEL

As mentioned in Section 5, existing works have also noted the importance of accounting for the change of the state after each action: Xin et al. (2021b; 2020) claimed that models should recompute the embeddings after each action. However, because of the additional training cost, they proposed the following approximation: fixing the lower encoder layers and recomputing just the top layer with a mask over the already visited nodes. They hypothesize a kind of hierarchical feature-extraction property that may make the last layers more important for the fine-grained next decision.
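To make the contrast between the two regimes concrete, here is a toy sketch with illustrative stand-ins only: a single matrix multiply plus ReLU plays the role of a Transformer layer, and the mask simply zeroes out visited nodes.

```python
import numpy as np

def encode_full(x, weights, mask):
    """Full recompute: every layer is re-run on the current state,
    with visited nodes masked out."""
    h = x * mask[:, None]
    for W in weights:
        h = np.maximum(h @ W, 0.0)  # toy stand-in for a Transformer layer
    return h

def encode_top_only(x, weights, mask, cache=None):
    """Approximation in the spirit of Xin et al. (2021b; 2020): the
    lower layers are computed once (no mask) and frozen; only the top
    layer is recomputed with the visited-node mask applied."""
    if cache is None:
        cache = x
        for W in weights[:-1]:
            cache = np.maximum(cache @ W, 0.0)
    return np.maximum((cache * mask[:, None]) @ weights[-1], 0.0), cache
```

In the approximated variant, only the final matrix multiply is repeated at each construction step, which is where the saved inference time comes from.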
In contrast, we call our entire model after each construction step, effectively recomputing the embeddings of each state. We hypothesize that this property may explain the superior performance (Table 1) w.r.t. the MDAM model (Xin et al., 2020). In order to support this hypothesis, we have implemented an approximated version of our model as follows: we fixed the bottom layers of our model and recomputed just the top layer, masking already visited nodes and adding the updated information (origin and destination tokens for TSP). As expected, inference time is 1.6 times shorter, but performance is severely degraded: we obtained an optimality gap of 9.833% (vs 0.540% with the original model) on TSP100.

D.3 REZERO VS BATCHNORM AS NORMALIZATION

Most NCO works that use transformer networks (Kool et al., 2019; Kwon et al., 2021; Xin et al., 2020) use batch normalization (Ioffe & Szegedy, 2015) rather than layer normalization (Ba et al., 2016) in attention layers. We find ReZero normalization (Bachlechner et al., 2021) to work even better. Figure 3 shows the effect of using ReZero compared to batch normalization in our Transformer network.
Using it leads to more stable training, better performance, and drastically lower variance between seeds.

E ON THE IMPACT OF EXPERT SOLUTIONS

Our datasets consist of pairs of a problem instance and a solution (tour). In this paper, however, we use imitation learning, which instead requires pairs of a problem instance and an (expert) trajectory in the MDP. Now, a solution may be obtained from multiple trajectories in the MDP. For example, with TSP, a solution is a loop in a graph, and one has to decide at which node its construction started and in which direction it proceeded. With CVRP, the order in which the subtours are constructed also needs to be decided. Hence, all our datasets are pre-processed to transform solutions into corresponding construction trajectories (one choice for each solution, or even all possible ones). We experimentally observed that this transformation has an impact on the performance.
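For TSP, the solution-to-trajectory expansion can be sketched as follows (a minimal illustration; for CVRP one would additionally choose an ordering of the subtours):

```python
def tsp_trajectories(tour):
    """Enumerate every construction trajectory that produces the same
    TSP loop: n possible start nodes times 2 traversal directions."""
    n = len(tour)
    trajs = []
    for s in range(n):
        rot = tour[s:] + tour[:s]            # start the loop at node rot[0]
        trajs.append(rot)                    # forward direction
        trajs.append([rot[0]] + rot[:0:-1])  # reverse direction, same start
    return trajs
```

A tour over n nodes thus expands into 2n construction trajectories; the pre-processing chooses one of them (or keeps all) as the imitation target.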
For example, with CVRP, choosing, for each solution, the construction in the order in which LKH3 displays it, which does not seem arbitrary, yields a 1.3-point better optimality-gap performance compared to following a random ordering of the sub-tours. We hypothesize that if there is any bias in the display of the optimal solution - for example, shorter tour first, or closest node first - it requires slightly less model capacity to learn action imitation for this display rather than for all possible displays.

Figure 3: Training curves showing the effect of the choice of normalization layer on validation performance (optimality gap over 1000 epochs, for BatchNorm and ReZero with two seeds each).

F PROOF OF PROPOSITION 1 (SOUNDNESS OF THE NAIVE MDP)

We show here that the procedure SOLVE satisfies SOLVE(f) = arg min_{x∈X} f(x). We first show the following general lemma: let ψ : Y → X and f : X → R∪{∞} be arbitrary mappings; if ψ is surjective, then

    arg min_{x∈X} f(x) = ψ(arg min_{y∈Y} f(ψ(y)))

This is a simple application of the definition of arg min (as a set).
The subscript * denotes the steps where the assumption that ψ is a surjection is used:

    x' ∈ ψ(arg min_y f(ψ(y)))
      iff   ∃y' ∈ arg min_y f(ψ(y))  such that  x' = ψ(y')
      iff   ∃y'  x' = ψ(y')  and  ∀y  f(ψ(y')) ≤ f(ψ(y))
      iff   ∃y'  x' = ψ(y')  and  ∀y  f(x') ≤ f(ψ(y))
      iff*  ∀y  f(x') ≤ f(ψ(y))
      iff*  ∀x  f(x') ≤ f(x)
      iff   x' ∈ arg min_x f(x)

Let (F, X) be a CO problem with specification ⟨T, SOL, VAL⟩ and M the naive MDP obtained from it. For each f ∈ F, let v_f = VAL(f, ε), X_f = {x ∈ X | f(x) < ∞}, and let Y_f be the set of M-trajectories which start at (f, ε) and end at a stop state.

Figure 4: Computation flow at the t-th time step, when a partial solution of length t − 1 already exists. The input state consists of the destination node (i.e. the first and last node in the TSP tour), the origin node (i.e. the most recent node in the tour), and the set of remaining nodes. After passing all input nodes through an embedding layer, we add special, learnable vector embeddings to the origin and destination nodes to signal their special meaning. Finally, a Transformer encoder (ReLU activation, ReZero normalization) followed by a linear softmax head selects the next node at step t.
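The computation flow in the Figure 4 caption can be sketched as follows (an illustration, not the paper's implementation: random arrays stand in for the learned parameters, and the Transformer encoder is elided to an identity for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                            # embedding dimension (illustrative)
W_emb = rng.normal(size=(2, d))   # embedding of 2-D node coordinates
tok_origin = rng.normal(size=d)   # stand-in for the learnable origin embedding
tok_dest = rng.normal(size=d)     # stand-in for the learnable destination embedding
W_head = rng.normal(size=(d,))    # linear classifier head

def policy_step(coords, origin_idx, dest_idx):
    """One decoding step: embed all input nodes, add the special origin
    and destination token embeddings, encode (the Transformer encoder is
    elided to an identity here), then score nodes with a linear head and
    normalize with a softmax."""
    h = coords @ W_emb
    h[origin_idx] = h[origin_idx] + tok_origin
    h[dest_idx] = h[dest_idx] + tok_dest
    scores = h @ W_head
    p = np.exp(scores - scores.max())
    return p / p.sum()
```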
For any M-trajectory τ = s_0 t_1 r_1 s_1 ··· t_n r_n s_n in Y_f, define ψ(τ) =def SOL(s_n). Since τ ∈ Y_f, we have s_0 = (f, ε) and s_n is a stop state, i.e. SOL(s_n) = ψ(τ) ∈ X, and by Equation 2a, f(ψ(τ)) < ∞. Hence ψ : Y_f → X_f. By construction, s_m = (f, t_1:m) for all m ∈ 1:n, and each transition in τ has a finite reward VAL(s_{m−1}) − VAL(s_m) (the condition for it to be valid). Hence the cumulated reward is given by R(τ) = VAL(s_0) − VAL(s_n). Now, VAL(s_0) = v_f, which is independent of τ, and by Equation 2c, VAL(s_n) = f(ψ(τ)). Hence f(ψ(τ)) = v_f − R(τ). Let's show that ψ is surjective.
Let x ∈ X_f. Equation 2a ensures that x = SOL(f, t_1:n) for some t_1:n ∈ T*. For each m ∈ {0:n}, let s_m = (f, t_1:m) and consider the sequence τ = s_0 t_1 r_1 s_1 ··· t_n r_n s_n. Now, SOL(s_n) = x ≠ ⊥, hence τ ends in a stop state and starts at (f, ε). By Equation 2c we have VAL(s_n) = f(x), hence VAL(s_n) < ∞, and VAL(s_m) < ∞ for all m ∈ {0:n−1}. And by Equation 2b, SOL(s_m) = ⊥, hence all the transitions in τ are valid in M. Hence τ ∈ Y_f and, by definition, ψ(τ) = x. Therefore we can apply the lemma proved above:

    arg min_{x∈X_f} f(x) = ψ(arg min_{τ∈Y_f} f(ψ(τ)))
                         = ψ(arg min_{τ∈Y_f} v_f − R(τ))
                         = ψ(arg max_{τ∈Y_f} R(τ))
                         = ψ(SOLVEMDP_M(f, ε))
                         = SOLVE(f)

Now, obviously, arg min_{x∈X} f(x) = arg min_{x∈X_f} f(x), since by definition f is infinite on X\X_f.
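The lemma above is easy to sanity-check on a toy finite instance (hypothetical names; `psi` plays the role of the surjection ψ mapping trajectories onto solutions, and `f` is the objective):

```python
def argmin_set(dom, f):
    """arg min as a set, matching the convention used in the proof."""
    m = min(f(v) for v in dom)
    return {v for v in dom if f(v) == m}

# Toy finite instance: psi maps Y onto X (surjective), f scores X.
Y = ["t1", "t2", "t3", "t4"]
psi = {"t1": "a", "t2": "b", "t3": "c", "t4": "a"}
X = set(psi.values())
f = {"a": 3, "b": 1, "c": 2}.__getitem__

lhs = argmin_set(X, f)                                      # arg min over X
rhs = {psi[y] for y in argmin_set(Y, lambda y: f(psi[y]))}  # psi(arg min over Y)
assert lhs == rhs == {"b"}
```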
G PLOTS OF SOME TSPLIB AND CVRPLIB SOLUTIONS

Instance pcb442: (a) optimal solution; (b) our model (BS16), opt_gap 0.549%; (c) MDAM (BS50), opt_gap 11.501%; (d) POMO (x8), opt_gap 18.614%.

Instance pr1002: (a) optimal solution; (b) our model (BS16), opt_gap 4.253%; (c) MDAM (BS50), opt_gap 20.916%; (d) POMO (x8), opt_gap 44.664%.

Instance X-n284-k15: (a) optimal solution; (b) our model (BS16), opt_gap 3.464%; (c) MDAM (BS50), opt_gap 45.669%; (d) POMO (x8), opt_gap 11.416%.

Instance X-n513-k21: (a) best known solution; (b) our model (BS16), opt_gap 2.667%; (c) MDAM (BS50), opt_gap 19.739%; (d) POMO (x8), opt_gap 46.603%.

H BACKGROUND ON BISIMULATION-BISIMILARITY

H.1 BISIMULATION IN LABELLED TRANSITION SYSTEMS

Bisimulation is a very broad concept which applies to arbitrary Labelled Transition Systems (LTS). It has been instantiated in various flavours of LTS, such as process calculi, finite-state automata, game theory, and of course MDPs (initially deterministic MDPs such as those used here, later extended to stochastic MDPs, which we are not concerned with here). A bisimulation is a binary relation R among states which "commutes" with the transitions of the LTS in the following diagram, which should informally be read as follows: if the pair of arrows connected to p (resp. q) exists, then so does the "opposite" pair (w.r.t. the centre of the diagram):

    p  --ℓ-->  p'
    |          |
    R          R
    |          |
    q  --ℓ-->  q'

The notation p --ℓ--> p' means the transition from p to p' with label ℓ is valid. Thus, formally:

Definition 1. A binary relation R on states is a bisimulation if, for every label ℓ and all states p, q such that p R q:
- for all p' such that p --ℓ--> p', there exists q' such that q --ℓ--> q' and p' R q';
- for all q' such that q --ℓ--> q', there exists p' such that p --ℓ--> p' and p' R q'.

Note that this definition extends to the "heterogeneous" case where R is bipartite, relating the state spaces of two LTSs L1, L2 sharing the same label space. One simply forms a new LTS L whose state space is the disjoint union of the state spaces of L1, L2 and whose transitions are those of L1, L2 in their respective (disjoint) components. A heterogeneous bisimulation on L1, L2 is then a (homogeneous) bisimulation on L. Most results below also have a heterogeneous version.

Proposition 2. The set of bisimulations (a subset of the set of binary relations on states) is stable by union, composition, and inversion, hence also by reflexive-symmetric-transitive closure. In particular, the union of all bisimulations, called the bisimilarity of the LTS, is itself a bisimulation, and it is also an equivalence relation.

Proof.
(outline) Let us detail stability by composition; the other cases are similarly straightforward. If R1, R2 are the two bisimulations being composed, apply the commutation property to each cell of the following diagram (from top to bottom):

    p  --R1--  r  --R2--  q
    |ℓ         |ℓ         |ℓ
    p′ --R1--  r′ --R2--  q′

Definition 2. Given an LTS L, its transitive closure is another LTS, denoted L∗, on the same state space, where the labels are the sequences of labels of L and the transitions are defined by

    p --ℓ1:n-->(L∗) p′  if  ∃p0:n such that  p = p0 --ℓ1-->(L) p1 · · · pn−1 --ℓn-->(L) pn = p′

Proposition 3. If R is a bisimulation on L, then it is also a bisimulation on L∗.

Proof. (outline) This is essentially shown by successively applying the commutation property to each cell of the following diagram (from left to right):

    p0 --ℓ1--> p1 · · · pn−1 --ℓn--> pn
    |R         |R        |R          |R
    q0 --ℓ1--> q1 · · · qn−1 --ℓn--> qn

Definition 3. Given an LTS L and an equivalence relation R on its state space, we can define the quotient LTS L/R with the same label space, where the states are the R-equivalence classes and the transitions are defined, for any classes ˙p, ˙p′, by

    ˙p --ℓ-->(L/R) ˙p′  if  ∀p ∈ ˙p  ∃p′ ∈ ˙p′  p --ℓ-->(L) p′

Proposition 4. Let R be an equivalence relation on the state space of L. Then R is a bisimulation on L if and only if ∈ is a (heterogeneous) bisimulation on L, L/R.

Proof. We show both implications.

• Assume R is a bisimulation on L.
  – Let p ∈ ˙q and p --ℓ--> p′. Let q ∈ ˙q. Hence pRq and p --ℓ--> p′. Since R is a bisimulation, there exists q′ such that q --ℓ--> q′ and p′Rq′.
Hence for all q ∈ ˙q there exists q′ ∈ ¯p′ such that q --ℓ--> q′. Hence by definition ˙q --ℓ--> ¯p′, while p′ ∈ ¯p′.
  – Let p ∈ ˙q and ˙q --ℓ--> ˙q′. Hence by definition there exists p′ ∈ ˙q′ such that p --ℓ--> p′.

• Assume ∈ is a (heterogeneous) bisimulation on L, L/R.
  – Let pRq and p --ℓ--> p′. Hence p ∈ ¯q and p --ℓ--> p′. Since ∈ is a bisimulation, there exists ˙q′ such that p′ ∈ ˙q′ and ¯q --ℓ--> ˙q′. Now q ∈ ¯q; hence, by definition, there exists q′ ∈ ˙q′ such that q --ℓ--> q′. And p′Rq′ since p′, q′ ∈ ˙q′.
  – Let pRq and q --ℓ--> q′.
Hence qRp and q --ℓ--> q′, and we are in the previous case up to a permutation of variables.

Proposition 5. Let R be an equivalence relation on the state space of L. If R is a bisimulation on L, then for any L-state p, L/R-state ˙p′ and L∗-label ℓ,

    ¯p --ℓ-->((L/R)∗) ˙p′  if and only if  ∃p′ ∈ ˙p′  p --ℓ-->(L∗) p′

Proof. Simple combination of Propositions 4 and 3. R is a bisimulation on L, hence ∈ is a heterogeneous bisimulation on L, L/R (Proposition 4), hence also a heterogeneous bisimulation on L∗, (L/R)∗ (Proposition 3, heterogeneous version). If ¯p --ℓ-->((L/R)∗) ˙p′, then, since p ∈ ¯p and ∈ is a bisimulation, we have p --ℓ-->(L∗) p′ for some p′ ∈ ˙p′. Conversely, if p --ℓ-->(L∗) p′ for some p′ ∈ ˙p′, then, since p ∈ ¯p and ∈ is a bisimulation, we have ¯p --ℓ-->((L/R)∗) ˙q′ and p′ ∈ ˙q′ for some ˙q′.
Now p′ ∈ ˙p′ ∩ ˙q′, hence ˙p′ = ˙q′ and ¯p --ℓ-->((L/R)∗) ˙p′.

H.2 BISIMULATION IN DETERMINISTIC MDP

Definition 4. An MDP is a pair (L, ⊤) where L is an LTS with label space A × R for some action space A (action-reward pairs denoted a|r) and ⊤ is a subset of states (the stop states). It is said to be deterministic if

    s --a|r1--> s′1 and s --a|r2--> s′2 imply r1 = r2 and s′1 = s′2

Given an L-trajectory τ, i.e. a sequence s0 a1 r1 s1 · · · an rn sn where si−1 --ai|ri--> si for all i ∈ {1:n}, its cumulated reward is defined by R(τ) = Σ_{i=1}^{n} ri.
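As a concrete illustration (not taken from the paper; all state and action names are invented), a deterministic MDP in the sense of Definition 4 can be encoded as a map from (state, action) pairs to unique (reward, next-state) pairs, and the cumulated reward of a trajectory is then a plain sum:

```python
# Toy deterministic MDP (L, ⊤) in the sense of Definition 4: determinism means
# each (state, action) pair maps to a unique (reward, next state).
# State/action names are illustrative only.
transitions = {
    ("s0", "a"): (1.0, "s1"),
    ("s0", "b"): (0.5, "s2"),
    ("s1", "a"): (2.0, "s3"),
    ("s2", "a"): (3.0, "s3"),
}
stop_states = {"s3"}  # the set ⊤ of stop states

def cumulated_reward(start, actions):
    """Follow the trajectory s0 a1 r1 s1 ... an rn sn; return (sn, R(τ) = Σ ri)."""
    state, total = start, 0.0
    for a in actions:
        r, state = transitions[(state, a)]
        total += r
    return state, total

final, total = cumulated_reward("s0", ["b", "a"])  # s0 -b|0.5-> s2 -a|3.0-> s3
```

Because the MDP is deterministic, a trajectory is fully determined by its start state and action sequence, which is what makes this dictionary encoding sufficient.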
The generic problem statement of the MDP solution framework is, given an MDP (L, ⊤) and one of its states so, to solve the following optimisation:

    SOLVEMDP((L, ⊤), so) = arg max_τ { R(τ) | τ is an L-trajectory starting at so and ending in ⊤ }

This definition of MDP and the standard textbook one coincide only in the deterministic case (in the standard definition, an MDP is deterministic if the distribution of output state-reward pairs for a given input state and allowed action is "one-hot"). The non-deterministic case in the definition above does not match the standard definition: it would be wrong to interpret two distinct transitions for the same input state s and action a as meaning that the outcome of applying a to state s is distributed between the two output reward-state pairs according to some specific distribution (e.g. uniform). Also, in the problem statement, the objective R(τ) carries no expectation, which, with the standard definition, only makes sense in the case of a deterministic MDP. Similarly, the standard problem statement is expressed in terms of policies rather than trajectories directly, but in the deterministic case the two are equivalent.
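On a small acyclic instance, this arg max can be computed directly by enumerating every trajectory from so that ends in ⊤. The following depth-first sketch (an invented toy instance, not the paper's solver) makes the optimisation concrete:

```python
# Brute-force SOLVEMDP on a tiny acyclic deterministic MDP: enumerate all
# trajectories from the start state that end in a stop state, keep the best.
# State/action names are invented for illustration.
transitions = {
    "s0": {"a": (1.0, "s1"), "b": (0.5, "s2")},
    "s1": {"a": (2.0, "s3")},
    "s2": {"a": (3.0, "s3")},
}
stop_states = {"s3"}

def solve_mdp(state, reward=0.0, prefix=()):
    # A trajectory may only end in ⊤; elsewhere it must continue.
    best = (reward, prefix) if state in stop_states else (float("-inf"), prefix)
    for action, (r, nxt) in transitions.get(state, {}).items():
        best = max(best, solve_mdp(nxt, reward + r, prefix + (action,)))
    return best

best_reward, best_actions = solve_mdp("s0")  # the best trajectory is b then a
```

This enumeration is exponential in general, which is exactly why the paper learns constructive policies instead; the sketch only mirrors the problem statement.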
Observe that there is a one-to-one correspondence between trajectories in L and transitions in the LTS L∗, so the problem statement can be formulated equivalently as

    SOLVEMDP((L, ⊤), so) = arg max_ℓ { R(ℓ) | ∃s ∈ ⊤, so --ℓ-->(L∗) s }        (4)

Proposition 6. Let (L, ⊤) be an MDP and R an equivalence relation on its state space.

1. (L/R, ¯⊤) is also an MDP, where ¯⊤ = {¯s | s ∈ ⊤}, and if L is deterministic, so is L/R.
2. If R is a bisimulation on L preserving ⊤ (i.e. ∪_{s∈⊤} ¯s = ⊤), then for any state so and label ℓ in L∗ we have

    ∃s ∈ ⊤, so --ℓ-->(L∗) s  if and only if  ∃ ˙s ∈ ¯⊤, ¯so --ℓ-->((L/R)∗) ˙s

Proof. The second property is a direct consequence of Proposition 5 and the assumption that ⊤ is preserved by R.
For the first, assume that L is deterministic. Let ˙s, ˙s1, ˙s2 be L/R-states such that ˙s --a|r1--> ˙s1 and ˙s --a|r2--> ˙s2. Choose s ∈ ˙s. Hence, by definition, there exist s1 ∈ ˙s1 and s2 ∈ ˙s2 such that s --a|r1--> s1 and s --a|r2--> s2. Since L is deterministic, we have r1 = r2 and s1 = s2 ∈ ˙s1 ∩ ˙s2, hence ˙s1 = ˙s2. Hence L/R is also deterministic.

Therefore, when R is a bisimulation equivalence on L preserving ⊤, the generic MDP problem statement of Eq. 4 can be reformulated as

    SOLVEMDP((L, ⊤), so) = SOLVEMDP((L/R, ¯⊤), ¯so) = arg max_ℓ { R(ℓ) | ∃ ˙s ∈ ¯⊤, ¯so --ℓ-->((L/R)∗) ˙s }        (5)

Note that a bisimulation on L preserving ⊤ is simply a bisimulation on the LTS ˙L defined as follows: ˙L has the same state space as L and an additional transition s --·--> s for each s ∈ ⊤, where "·" is a distinguished label not present in L.
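The reformulation above can be exercised on a toy instance (all names invented, and R given directly by its classes rather than computed): merge R-equivalent states, build the quotient MDP, and check that the optimal cumulated reward is unchanged, as the equality of the two SOLVEMDP problems predicts.

```python
# Quotienting a toy deterministic MDP by an equivalence relation R preserving ⊤,
# then checking that the brute-force optimum is unchanged.
transitions = {
    "s0": {"a": (1.0, "s1"), "b": (1.0, "s2")},
    "s1": {"a": (2.0, "s3")},
    "s2": {"a": (2.0, "s4")},
}
stop_states = {"s3", "s4"}
# R-equivalence classes: s1 and s2 behave identically, as do s3 and s4.
cls = {"s0": "c0", "s1": "c12", "s2": "c12", "s3": "c34", "s4": "c34"}

# Build the quotient MDP L/R: transitions between classes, stop classes ¯⊤.
q_transitions, q_stop = {}, {cls[s] for s in stop_states}
for s, outs in transitions.items():
    for a, (r, nxt) in outs.items():
        q_transitions.setdefault(cls[s], {})[a] = (r, cls[nxt])

def best_reward(trans, stop, state):
    # Brute-force optimum over trajectories ending in a stop state.
    best = 0.0 if state in stop else float("-inf")
    for r, nxt in trans.get(state, {}).values():
        best = max(best, r + best_reward(trans, stop, nxt))
    return best

assert best_reward(transitions, stop_states, "s0") == 3.0
assert best_reward(q_transitions, q_stop, cls["s0"]) == 3.0
```

The quotient here has three states instead of five; in the paper's setting this state-space reduction is the whole point of the BQ construction.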
A bisimulation R on ˙L captures some symmetries of the state space of ˙L. If R is taken to be the bisimilarity of ˙L, i.e. the union of all the bisimulations on ˙L, i.e. the union of all the bisimulations on L preserving ⊤, then it captures all the possible symmetries of the state space. This should be seen as an asymptotic result, since constructing and working with the full bisimilarity of ˙L is not feasible. But Proposition 6 remains valuable, as it applies to all bisimulations, not just the maximal one of ˙L (its bisimilarity).
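Finally, Definition 1 can be checked mechanically on a finite LTS. The sketch below (an invented four-state example) tests both clauses of the commutation property for a candidate relation R given as a set of state pairs:

```python
# Checking Definition 1 on a finite LTS given as (state, label, state) triples.
# R is a bisimulation iff for every pair (p, q) in R, each move of p is matched
# by a move of q with the same label into an R-related state, and vice versa.
lts = {("p", "x", "p1"), ("q", "x", "q1"), ("p1", "y", "p2"), ("q1", "y", "q2")}

def is_bisimulation(R, transitions):
    def forward_ok(p, q, rel):
        # every p --lbl--> p1 must be matched by some q --lbl--> q1 with (p1, q1) in rel
        return all(
            any(t == q and m == lbl and (p1, q1) in rel
                for (t, m, q1) in transitions)
            for (s, lbl, p1) in transitions if s == p
        )
    R_inv = {(q, p) for (p, q) in R}
    return all(forward_ok(p, q, R) and forward_ok(q, p, R_inv)
               for (p, q) in R)

# p and q step in lockstep through x then y, so relating them levelwise works:
assert is_bisimulation({("p", "q"), ("p1", "q1"), ("p2", "q2")}, lts)
# dropping the pair (p2, q2) breaks the match at the second step:
assert not is_bisimulation({("p", "q"), ("p1", "q1")}, lts)
```

This naive check is quadratic in |R| times |transitions|; partition-refinement algorithms compute the full bisimilarity far more efficiently, but the direct check suffices to illustrate the definition.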