Escaping Local Optima with Penalties in Distributed Iterative Improvement Search
Muhammed Basharu, Ines Arana, and Hatem Ahriz

Abstract

The advantages offered by iterative improvement search make it a popular technique for solving problems in centralised settings. However, the key challenge with this approach is finding effective strategies for dealing with local optima. Such strategies must push the algorithm away from plateaux in the objective landscape and prevent it from returning to those areas. A wide variety of strategies have been proposed for centralised algorithms, whereas the two main strategies in distributed iterative improvement remain constraint weighting and stochastic escape. In this paper, we discuss the two-phased strategy employed in Distributed Penalty Driven Search (DisPeL), an iterative improvement algorithm for solving Distributed Constraint Satisfaction Problems. In the first phase of the strategy, agents try to force the search out of local optima by perturbing their neighbourhoods; in the second phase, they use penalties to guide the search away from plateaux if perturbation does not work. We discuss the heuristics that make up the strategy and provide empirical justification for their inclusion. We also present empirical results on random non-binary problems to demonstrate the effectiveness of the strategy.
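
The sketch below is intended only as a rough illustration of the general two-phase escape idea summarised above (perturb first, then penalise), not as the authors' DisPeL algorithm or its distributed, multi-agent protocol; the toy problem, variable names, and parameters are all hypothetical and chosen purely for readability.

```python
# Illustrative sketch (NOT the DisPeL implementation): greedy iterative improvement
# on a toy CSP, with a two-phase escape -- perturbation first, penalties second.
import random

# Hypothetical toy problem: variables with small integer domains and not-equal constraints.
VARIABLES = ["x1", "x2", "x3", "x4"]
DOMAIN = [0, 1, 2]
CONSTRAINTS = [("x1", "x2"), ("x2", "x3"), ("x3", "x4"), ("x4", "x1"), ("x1", "x3")]

def violations(assignment):
    """Total number of violated not-equal constraints."""
    return sum(1 for a, b in CONSTRAINTS if assignment[a] == assignment[b])

def evaluate(var, value, assignment, penalties):
    """Violations involving `var`=value, plus any accumulated penalty on that pair."""
    trial = dict(assignment, **{var: value})
    cost = sum(1 for a, b in CONSTRAINTS if var in (a, b) and trial[a] == trial[b])
    return cost + penalties.get((var, value), 0)

def improve_once(assignment, penalties):
    """One greedy improvement pass; returns True if any variable changed value."""
    changed = False
    for var in VARIABLES:
        best = min(DOMAIN, key=lambda v: evaluate(var, v, assignment, penalties))
        if evaluate(var, best, assignment, penalties) < evaluate(var, assignment[var], assignment, penalties):
            assignment[var] = best
            changed = True
    return changed

def solve(max_rounds=200, seed=0):
    random.seed(seed)
    assignment = {v: random.choice(DOMAIN) for v in VARIABLES}
    penalties = {}  # (variable, value) -> accumulated penalty
    stuck_rounds = 0
    for _ in range(max_rounds):
        if violations(assignment) == 0:
            return assignment
        if improve_once(assignment, penalties):
            stuck_rounds = 0
            continue
        stuck_rounds += 1
        if stuck_rounds == 1:
            # Phase 1: perturbation -- reset a randomly chosen variable to shake the search loose.
            var = random.choice(VARIABLES)
            assignment[var] = random.choice(DOMAIN)
        else:
            # Phase 2: penalties -- make the stuck values look worse so the next
            # improvement pass moves the search off the plateau.
            for var in VARIABLES:
                key = (var, assignment[var])
                penalties[key] = penalties.get(key, 0) + 1
    return assignment

if __name__ == "__main__":
    solution = solve()
    print(solution, "violations:", violations(solution))
```

In this simplification all variables live in one process; in the distributed setting discussed in the paper, each agent owns its variables and applies perturbation and penalties locally, coordinating with neighbours through messages.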