BnB vs. Relaxation Labeling

Final project by

Dima Vingurt

vinguzt@post.bgu.ac.il


Aim of the project.

Implement and test the relaxation labeling (RL) algorithm on constraint optimization problems (COP). The branch and bound (BnB) algorithm is used as a reference for the best/optimal solution.

CSP and WCSP problems.

Constraint satisfaction problems (CSP) are problems defined as a set of objects whose state must satisfy a number of constraints. A CSP represents the entities in a problem as a homogeneous collection of finite constraints over variables, and is solved by constraint satisfaction methods. CSPs often exhibit high complexity, requiring a combination of heuristics and combinatorial search methods to be solved in reasonable time. The boolean satisfiability problem (SAT) can be viewed as a CSP. "Real life" examples include planning and resource allocation. A weighted CSP (WCSP) is the optimization variant: each violated constraint carries a cost (price), and the goal is to find an assignment of minimal total cost.

BnB

In order to solve a CSP we can apply the backtracking (BT) algorithm. BT is a general algorithm for finding all (or some) solutions to computational problems, notably CSPs, that incrementally builds candidate solutions and abandons each partial candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution. BT can easily be visualized as a search on the problem tree. It is important to add that the variables have a fixed order, which does not change. Branch and bound (BnB) extends BT to optimization problems such as WCSP: it keeps the cost of the best complete assignment found so far and prunes any branch whose partial cost already exceeds that bound. A minimal backtracking sketch is shown below.
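As an illustration only, here is a minimal recursive backtracking sketch in Java; the ConstraintCheck interface, the Backtracking class name and the fixed 0..n-1 variable order are assumptions for the sketch, not the project's actual code.

```java
// Minimal recursive backtracking for a binary CSP (illustrative only; the
// ConstraintCheck interface stands for whatever constraint test the concrete
// problem defines, and variables are tried in a fixed order 0..n-1).
interface ConstraintCheck {
    boolean consistent(int i, int vi, int j, int vj);
}

class Backtracking {
    // Returns true iff a full consistent assignment exists; on success the
    // assignment array holds one solution.
    static boolean backtrack(int n, int d, ConstraintCheck c, int[] assignment, int var) {
        if (var == n) return true;                 // all variables assigned
        for (int value = 0; value < d; value++) {
            boolean ok = true;
            for (int j = 0; j < var && ok; j++)    // check only against earlier variables
                ok = c.consistent(var, value, j, assignment[j]);
            if (ok) {
                assignment[var] = value;
                if (backtrack(n, d, c, assignment, var + 1)) return true;
            }
        }
        return false;                              // no value works: backtrack
    }
}
```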

Relaxation Labeling (RL)

Another way to solve COP problems is to use local search algorithms; RL is one such algorithm. In order to solve a COP with RL we need to define a set of objects, labels, compatibilities, a support function and an update rule. The set of objects will be the variables and the labels will be the values. It is natural to take the compatibilities to be the constraints: r_ij(λ, μ) will be the price of the constraint between variable i taking the value λ and variable j taking the value μ. In order to define the support we have to remember that we are looking for a local minimum of the price. In what follows, p_j(μ) denotes the probability of variable j taking the value μ.
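In the standard relaxation-labeling form the support and the update rule can be written as below; the transform mc − r_ij(λ, μ) is one possible way to turn costs into nonnegative compatibilities so that maximizing support minimizes the expected price, and the exact convention used in the project may differ.

```latex
% Support of label (value) \lambda for object (variable) i, using the
% assumed compatibility transform c_{ij} = mc - r_{ij}:
\[
  s_i(\lambda) \;=\; \sum_{j \neq i} \sum_{\mu}
      \bigl(\mathrm{mc} - r_{ij}(\lambda,\mu)\bigr)\, p_j(\mu)
\]
% Multiplicative update, renormalized so probabilities sum to 1 per variable:
\[
  p_i(\lambda) \;\leftarrow\;
  \frac{p_i(\lambda)\, s_i(\lambda)}{\sum_{\mu} p_i(\mu)\, s_i(\mu)}
\]
```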

Implementation

Problem

The WCSP is generated as an instance of the Java class Problem. Each problem consists of n variables with domain size d and a constraint matrix cons[][][][], where cons[i][j][k][l] represents r_ij(λ, μ). In order to generate the constraint matrix we need two parameters, p1 and p2:
  • p1 (density): the chance of a constraint between variables i and j.
  • p2 (tightness): the chance of a constraint between a given pair of values of the constrained variables i and j.

In addition, the problem has a maximum cost (price) parameter mc: each nonzero constraint gets a random cost between 1 and mc. mc was kept constant in this work and set to 50. A zero entry means no constraint. A possible generator sketch is shown below.
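The following is a minimal sketch of how such a random instance could be generated; the constructor signature, field names and the cost() helper are assumptions for illustration, not necessarily the project's actual Problem class.

```java
import java.util.Random;

// Minimal sketch of a random binary WCSP generator (names assumed).
class Problem {
    final int n, d, mc;
    final int[][][][] cons;   // cons[i][j][k][l] = r_ij(k, l); 0 means no constraint

    Problem(int n, int d, int mc, double p1, double p2, long seed) {
        this.n = n; this.d = d; this.mc = mc;
        this.cons = new int[n][n][d][d];
        Random rnd = new Random(seed);
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (rnd.nextDouble() < p1)               // p1: constraint density
                    for (int k = 0; k < d; k++)
                        for (int l = 0; l < d; l++)
                            if (rnd.nextDouble() < p2) { // p2: constraint tightness
                                int cost = 1 + rnd.nextInt(mc); // random cost in [1, mc]
                                cons[i][j][k][l] = cost;
                                cons[j][i][l][k] = cost; // keep the matrix symmetric
                            }
    }

    // Price of assigning value vi to variable i and vj to variable j.
    int cost(int i, int vi, int j, int vj) { return cons[i][j][vi][vj]; }
}
```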

BnB

BnB is implemented as the class BnB, as described in the references, and returns the price of the best assignment. A minimal sketch of the search is shown below.
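Below is a minimal sketch of how such a depth-first branch and bound could look, reusing the hypothetical Problem sketch above; the project's actual BnB class may differ, for example in variable ordering or in using stronger lower bounds.

```java
// Minimal depth-first branch and bound for the WCSP sketch above (illustrative).
class BnB {
    private final Problem p;
    private int best = Integer.MAX_VALUE;   // price of the best complete assignment so far

    BnB(Problem p) { this.p = p; }

    int solve() {
        search(new int[p.n], 0, 0);
        return best;
    }

    private void search(int[] assignment, int var, int costSoFar) {
        if (costSoFar >= best) return;       // bound: prune this branch
        if (var == p.n) { best = costSoFar; return; }
        for (int value = 0; value < p.d; value++) {
            int extra = 0;                    // cost added by this assignment
            for (int j = 0; j < var; j++)
                extra += p.cost(var, value, j, assignment[j]);
            assignment[var] = value;
            search(assignment, var + 1, costSoFar + extra);
        }
    }
}
```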

RL

RL is implemented as the class RelaxLabel, which gets the WCSP problem and a number of iterations k. It first performs initialization: generating a random probability for each value of each variable (a two-dimensional matrix). Then, for k iterations, it computes the support and updates the probabilities as shown above. Finally, it assigns to each variable the value with the highest probability and computes the total price. A sketch follows below.
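Here is a minimal sketch of such a loop, reusing the hypothetical Problem sketch and the assumed mc − r_ij compatibility transform from above; the project's actual RelaxLabel class may differ in these details.

```java
import java.util.Random;

// Minimal sketch of the relaxation-labeling loop (illustrative; the
// compatibility transform mc - r_ij is an assumption, chosen so that
// maximizing support corresponds to minimizing the total price).
class RelaxLabel {
    private final Problem prob;
    private final int k;                    // number of iterations
    private final double[][] p;             // p[i][v] = probability of value v for variable i

    RelaxLabel(Problem prob, int k, long seed) {
        this.prob = prob; this.k = k;
        this.p = new double[prob.n][prob.d];
        Random rnd = new Random(seed);
        for (int i = 0; i < prob.n; i++) {  // random initial probabilities, normalized per variable
            double sum = 0;
            for (int v = 0; v < prob.d; v++) { p[i][v] = rnd.nextDouble(); sum += p[i][v]; }
            for (int v = 0; v < prob.d; v++) p[i][v] /= sum;
        }
    }

    int solve() {
        for (int it = 0; it < k; it++) {
            double[][] s = new double[prob.n][prob.d];
            for (int i = 0; i < prob.n; i++)          // support of value v for variable i
                for (int v = 0; v < prob.d; v++)
                    for (int j = 0; j < prob.n; j++) {
                        if (j == i) continue;
                        for (int u = 0; u < prob.d; u++)
                            s[i][v] += (prob.mc - prob.cost(i, v, j, u)) * p[j][u];
                    }
            for (int i = 0; i < prob.n; i++) {        // multiplicative update, renormalized
                double norm = 0;
                for (int v = 0; v < prob.d; v++) norm += p[i][v] * s[i][v];
                if (norm > 0)
                    for (int v = 0; v < prob.d; v++) p[i][v] = p[i][v] * s[i][v] / norm;
            }
        }
        int[] assignment = new int[prob.n];           // pick the most probable value per variable
        for (int i = 0; i < prob.n; i++)
            for (int v = 1; v < prob.d; v++)
                if (p[i][v] > p[i][assignment[i]]) assignment[i] = v;
        int price = 0;                                // total price of the final assignment
        for (int i = 0; i < prob.n; i++)
            for (int j = i + 1; j < prob.n; j++)
                price += prob.cost(i, assignment[i], j, assignment[j]);
        return price;
    }
}
```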

Experiments and results.

In order to test RL, WCSPs were generated with equal p1 and p2 parameters, p1 = p2 ∈ {0.6, 0.7, 0.8, 0.9}. It is known from previous research, conducted as part of the "Constraint Satisfaction" course, that below 0.6 the constraint matrix has low density and the optimal solution almost always has price 0. RL performs quite well under those conditions: most of the time it finds an assignment with total price 0. That is why the focus here is on high-density constraint matrices. Other important parameters of the WCSP are the number of variables n and the variable domain size d. For each setting, a set of 50 problems with the same n and d was generated. Each problem was solved by:

1. BnB, to get the optimal price.
2. The RL algorithm with 5 iterations, run 10 times in order to produce different initial probabilities.
3. The RL algorithm with 50 iterations, run 10 times in order to produce different initial probabilities.

The results are shown as the average difference between the BnB result and the RL runs. The figure above shows the results for all problems with varying n (number of variables) and d (domain size). As we can see, the size of the problem has a devastating effect on RL. Generally speaking, any local search depends heavily on its starting conditions and usually collapses into a local minimum. We can conclude that for small problems RL will find the optimal solution, and that increasing the number of iterations improves the solution. This conclusion leads us to the next experiment: solving hard WCSP instances with an increasing number of iterations. The results of this experiment are shown below.

In order to test the dependency on the iteration number, we chose the hardest problems: p1 = p2 ∈ {0.8, 0.9} and n = d = 10. From the figure above we can see that the difference between 5 and 50 iterations increases with p1 and p2. We can also see that increasing the number of iterations does help to approach the optimal solution, but at some point RL gets stuck in a local minimum: the difference between 100 and 500 iterations is negligible.

Conclusions

RL is a local search algorithm which can be successfully applied to WCSP problems in order to find an optimal solution. The level of success depends on the difficulty of the problem itself:

  • The n and d parameters have a great impact on the optimization quality of RL, but they also greatly increase the cost of a BnB run. Under some conditions it may be more useful to use a "cheap", non-exact method than to be stuck with a long BnB run.
  • The p1 and p2 parameters define the density of the constraint matrix. When those parameters are above 0.6 there is no solution to the CSP problem, i.e., no assignment with price 0. We can also say that the number of iterations needed for RL to stabilize in a local minimum is independent of p1 and p2. In addition, we can conclude that for hard WCSP problems RL will not be able to find the optimal solution even with a high number of iterations, but the average difference between the optimal solution and the RL solution will be on the order of the mc parameter.

Additional Information

The source code is implemented in Java.

References

  • Lecture notes, "Computational and Biological Vision", Ohad Ben-Shahar, 2014.
  • Lecture notes, "Constraints Processing", Amnon Meisels, 2014.
  • Javier Larrosa, Thomas Schiex, "Solving Weighted CSP by Maintaining Arc Consistency", Artificial Intelligence, 159(1-2), pp. 1-26, 2004.