Introduction to Artificial Intelligence

Assignment 5


Programming assignment - Decision-making under uncertainty

Harassed Citizen Problem

Goals

Sequential decision-making under uncertainty, using a belief-state MDP: the Harassed Citizen problem. (This is an obscure variant of the Canadian traveler problem.)

Harassed Citizen Problem - Domain Description

The domain description is similar to that of assignment 1, except that, as in assignment 4, we do not know the locations of the blockages and keys. For simplicity, however, we will assume that the blockages and keys appear independently, each with a known probability, at each vertex. They are revealed with certainty when the agent reaches a neighbouring vertex.

Thus, in the current problem to be solved, we are given a weighted undirected graph, where each vertex has a known probability of having each key type and each blockage type. These distributions are mutually independent. The agent's only actions are traveling between vertices; the cost of a traversal is the weight of the edge. Also for simplicity, we will assume only one agent, starting at s, and only one goal, at vertex t. The problem is to find a policy that brings the agent from s to t with minimal expected cost. You may assume that there is at least one path from s to t that is certain to be traversable; we will model this by adding a high-cost edge between s and t (see example).
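A belief state in this problem consists of the agent's location plus, for each vertex, what is known about its keys and blockages (unknown until a neighbouring vertex is reached, then present or absent with certainty). The following is a minimal sketch of one possible encoding in Python; the class and field names are our own choices, not mandated by the assignment:

```python
from dataclasses import dataclass
from typing import Tuple

# Knowledge about a vertex's key/blockage: unrevealed, revealed absent,
# or revealed present.
UNKNOWN, ABSENT, PRESENT = 0, 1, 2

@dataclass(frozen=True)  # frozen => hashable, so states can index a value table
class BeliefState:
    location: int               # vertex the agent currently occupies
    blockages: Tuple[int, ...]  # status of blockage ID 1 at vertices 1..n
    keys: Tuple[int, ...]       # status of key ID 1 at vertices 1..n

# Example: agent at vertex 1 of a 5-vertex graph, nothing revealed yet.
initial = BeliefState(1, (UNKNOWN,) * 5, (UNKNOWN,) * 5)
```

With at most 10 vertices and a single key and blockage ID, the belief space stays small enough to enumerate and store explicitly, as the assignment requires.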

The graph can be provided in a manner similar to previous assignments, for example:

#V 5    ; number of vertices n in graph (from 1 to n)

#B 1          ; Number of different blockage IDs ([1] in this case)

#V 3 K 1 0.4  ; Vertex 3, probability of key (resource) ID 1 is  0.4
#V 4 B 1 0.2  ; Vertex 4, probability of blockage ID 1 is  0.2

etc.

#E 1 2 W3   ; Edge from vertex 1 to vertex 2, weight 3
#E 2 3 W2   ; Edge from vertex 2 to vertex 3, weight 2
#E 3 4 W3   ; Edge from vertex 3 to vertex 4, weight 3
#E 4 5 W1   ; Edge from vertex 4 to vertex 5, weight 1
#E 2 4 W4   ; Edge from vertex 2 to vertex 4, weight 4
#E 1 5 W100 ; Edge from vertex 1 to vertex 5, weight 100

#Start 1
#Goal 5

The start and goal vertices should be determined via some form of user input (as in the file, or by querying the user). For example, in the above graph the start vertex is 1 and the goal vertex is 5.
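The format above can be read with a simple line-by-line parser. The sketch below (our own illustration; the dictionary keys and error handling are assumptions, not part of the assignment) strips the `;` comments and dispatches on the first token:

```python
def parse_graph(text):
    """Parse the assignment's input format into a plain dict. A sketch;
    malformed input is not validated."""
    g = {"n": 0, "num_blockage_ids": 0, "keys": {}, "blockages": {},
         "edges": [], "start": None, "goal": None}
    for line in text.splitlines():
        line = line.split(";")[0].strip()  # drop trailing comments
        if not line:
            continue
        tok = line.split()
        if tok[0] == "#V" and len(tok) == 2:
            g["n"] = int(tok[1])                       # #V 5
        elif tok[0] == "#V" and tok[2] == "K":
            g["keys"][(int(tok[1]), int(tok[3]))] = float(tok[4])       # #V 3 K 1 0.4
        elif tok[0] == "#V" and tok[2] == "B":
            g["blockages"][(int(tok[1]), int(tok[3]))] = float(tok[4])  # #V 4 B 1 0.2
        elif tok[0] == "#B":
            g["num_blockage_ids"] = int(tok[1])        # #B 1
        elif tok[0] == "#E":
            g["edges"].append((int(tok[1]), int(tok[2]), int(tok[3][1:])))  # W3 -> 3
        elif tok[0] == "#Start":
            g["start"] = int(tok[1])
        elif tok[0] == "#Goal":
            g["goal"] = int(tok[1])
    return g
```

Feeding it the example file above yields the 5-vertex graph with start 1 and goal 5.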

Solution method

The Canadian traveler problem is known to be PSPACE-complete (and it is likely that the Harassed Citizen problem is also PSPACE-complete), so you will be required to solve only very small instances. We will require that the entire belief space be stored in memory explicitly, and thus impose a limit of at most 10 vertices, with at most 1 key ID and 1 blockage ID. Your program should initialize the belief-space value function and use a form of value iteration (discussed in class) to compute the value function for the belief states. Maintain the optimal action for each belief state during value iteration, so that you have the optimal policy at convergence.
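Since the belief space is enumerated explicitly, the value-iteration loop itself is generic. The following is a minimal sketch, assuming the transition, cost, and action functions are supplied by the caller (the interface is our own choice, not the required one); it keeps the minimizing action per state so the policy falls out at convergence:

```python
def value_iteration(states, actions, transition, cost, goal_test, eps=1e-6):
    """Value iteration over an explicit (small) belief space.
    transition(s, a) -> list of (probability, next_state) pairs;
    cost(s, a) -> immediate traversal cost; goal states have value 0."""
    V = {s: 0.0 for s in states}
    policy = {}
    while True:
        delta = 0.0
        for s in states:
            if goal_test(s):
                continue
            best = None
            for a in actions(s):
                q = cost(s, a) + sum(p * V[s2] for p, s2 in transition(s, a))
                if best is None or q < best[0]:
                    best = (q, a)
            if best is None:      # no applicable action (e.g. irregular state)
                continue
            delta = max(delta, abs(best[0] - V[s]))
            V[s], policy[s] = best
        if delta < eps:
            return V, policy
```

On a trivial deterministic 3-state chain with unit edge costs this converges to values 2, 1, 0, which matches the shortest-path costs, as expected.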

Requirements

Your program should read the data, including the parameters (start and goal vertices). You should construct the policy, and present it in some way. Provide at least the following types of output:

  1. A full printout of the value of each belief state, and the optimal action in that belief state, if one exists. (Print something that indicates so if the state is irregular, e.g. if it is unreachable.)
  2. Run a sequence of simulations. That is, generate a graph instance (blockage and key locations) according to the distributions, and run the agent through this graph based on the (optimal) policy computed by your algorithm. Display the graph instance and sequence of actions. Allow the user to run additional simulations, for a newly generated instance in each case.
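Generating a graph instance for simulation (requirement 2) amounts to one independent coin flip per (vertex, ID) pair. A minimal sketch, assuming probabilities are stored in a dict keyed by (vertex, ID), as in the parser output; key locations would be sampled the same way:

```python
import random

def sample_instance(blockage_probs, rng=None):
    """Draw one concrete world: each (vertex, blockage ID) pair is blocked
    independently with its stated probability."""
    rng = rng or random.Random()
    return {vb: rng.random() < p for vb, p in blockage_probs.items()}
```

The agent is then run through the sampled instance by repeatedly looking up the optimal action for its current belief state in the computed policy; the user can re-run with a freshly sampled instance each time.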

Deliverables

  1. Source code and executable files of programs.
  2. Explanation of the method employed in your algorithm.
  3. Non-trivial example runs on at least 2 scenarios, including the input and output.
  4. Submit a makefile and a short description of how to run your program.

Official deadline: January 23, 2017