Introduction to Artificial Intelligence

Assignment 3


Homework assignment - simulators, agents, search, games, logic, inference



1)  (agents and environment simulators - 5%):
   For each of the following domains, state which type of agent is appropriate
   (table lookup, reflex, goal-based, or utility-based). Also, indicate the
   type of environment:
   a) An agent that plays Whist.
   b) An agent that can solve Sokoban problems.
   c) An autonomous humanoid robot that can win the DARPA robotics challenge (driving a car,
      walking across debris, connecting two pipes). Scoring depends on success in the tasks,
      on execution speed, and on using as little communication with the operator as possible.
   d) An internet shopping agent specializing in trip planning.
   e) An agent that can play Go.

2) (Search - 20%):
   Show the run of the following algorithms on the setting of question 6
   below, where the agent starting at V0 needs to save as many people as possible
   as quickly as possible before the deadline of T=9, where costs are as defined in assignment 1,
   with K=1.
   Assume that h(n) is the "half star" heuristic defined in class (based on the
   number of people in the largest group that cannot be saved if considered in isolation).

   a) Greedy search (f(n) = h(n)), breaking ties in favor of the state with the
      higher node number.
   b) A* search, (f(n) = g(n)+h(n)), breaking ties as above.
   c) Simplified RTA* with a limit of 2 expansions per real action, breaking ties as above.
   d) Repeat a-c but using h'(n) = 2*h(n) as the heuristic. Is h'(n) admissible?
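
   For reference, the mechanics of A* (part b) can be sketched generically. The
   sketch below is a minimal illustration on the plain shortest-path version of
   the question-6 graph, ignoring people, the deadline, and the tie-breaking
   rule, and using h = 0 as a placeholder heuristic; it is a checking aid, not
   a solution to the assignment.

```python
import heapq

def a_star(start, is_goal, successors, h):
    """Generic A* with f(n) = g(n) + h(n).
    `successors(state)` yields (next_state, step_cost) pairs.
    Returns (cost, path) for a cheapest path, or None if no goal is reachable."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return g, path
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# The undirected graph from question 6: E1(V0,V1,w=1), E2(V0,V2,w=2),
# E3(V2,V3,w=1), E4(V1,V3,w=1).
GRAPH = {
    'V0': [('V1', 1), ('V2', 2)],
    'V1': [('V0', 1), ('V3', 1)],
    'V2': [('V0', 2), ('V3', 1)],
    'V3': [('V2', 1), ('V1', 1)],
}
```

   For example, `a_star('V0', lambda s: s == 'V3', lambda s: GRAPH[s], lambda s: 0)`
   finds the cheapest V0-to-V3 path, cost 2 via V1.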

3) (Game trees 10%):
   Consider a 3-person game (A, B, C) with complete information and STOCHASTIC environment.
   Suppose A has 5 possible moves. Provide a game tree example that shows that for
   each of the following set of rules, A will have a different optimal move:
   a) Each agent out for itself, and they cannot communicate.
   b) Paranoid assumption: B and C are against A
   c) B and C may agree on a deal if it is beneficial to both of them
   d) A and C are partners aiming to maximize the sum of their scores
   e) A is a firm believer in Murphy's laws: "mother nature is a b*!=h", and wants to
      play absolutely safe.

   Explain your answer by showing the computed values in the internal nodes in each case.
   Note that you should probably make the example more concise by allowing
   only ONE choice of action for some of the players, or only ONE possible "stochastic" outcome
   in some of the branches.
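
   The backed-up values for rule (a), each agent out for itself with no
   communication, can be computed mechanically by a "max-n" evaluation with
   chance nodes. The sketch below uses an ad-hoc tuple encoding of the tree
   (a hypothetical representation chosen for illustration, not the course's
   notation); the other rules change only the backup step at internal nodes.

```python
def maxn(node):
    """Evaluate a 3-player game tree with chance nodes.
    Node encodings (hypothetical, for illustration only):
      ('leaf', (sA, sB, sC))           - terminal payoffs for A, B, C
      ('move', player, [children])     - player in {0,1,2} picks the child
                                         maximizing its own payoff component
      ('chance', [(prob, child), ...]) - expectation over stochastic outcomes
    Returns the payoff vector backed up to this node (rule (a): each
    agent out for itself)."""
    kind = node[0]
    if kind == 'leaf':
        return node[1]
    if kind == 'chance':
        values = [(p, maxn(child)) for p, child in node[1]]
        return tuple(sum(p * v[i] for p, v in values) for i in range(3))
    _, player, children = node
    return max((maxn(c) for c in children), key=lambda v: v[player])
```

   For instance, a root move by A between a sure payoff of 3 and a fair coin
   flip between 10 and 0 backs up the expectation 5, so A prefers the gamble;
   under rule (e) (playing absolutely safe) A would instead compare worst
   cases and pick the sure 3.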


4)  (Game-tree search - alpha-beta pruning - 15%)
   In the adversarial version of the hurricane evacuation problem:
   a) Construct an example where the optimal first move is no-op.
   b) Construct and show an example (including the static heuristic evaluation function and
      search tree), with a tree depth of at least 4 ply.
   c) Show where alpha-beta pruning can decrease search effort in the tree from b.
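
   As a reminder of what to show in part c, here is the standard alpha-beta
   skeleton on a toy two-player zero-sum tree (internal nodes as nested lists,
   leaves as static evaluation values). This is a generic sketch, not the
   adversarial hurricane game itself; your answer should mark the analogous
   cutoffs in your own tree from part b.

```python
from math import inf

def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning on a toy tree: internal nodes are
    lists of children, leaves are numeric static evaluations."""
    if not isinstance(node, list):
        return node                       # leaf: static evaluation
    if maximizing:
        value = -inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                     # beta cutoff: prune remaining siblings
        return value
    value = inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break                         # alpha cutoff
    return value
```

   On the classic tree [[3, 5], [2, 9]], after the left MIN node returns 3,
   the right MIN node is abandoned as soon as it sees the leaf 2, so the leaf
   9 is never evaluated; that skipped leaf is exactly the kind of saving part
   c asks you to exhibit.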


5) (Propositional logic - 10%)
   For each of the following sentences, determine whether they are
   satisfiable, unsatisfiable, and/or valid. For each sentence
   determine the number of models (over the propositional
   variables in the sentence). 
   Note: some items may seem hard, but with some (natural) intelligence they are
   easier than they seem.
   a) (not A and not B and not C and not D and not E and F) or (A and B and C and D and E)
   b) (A or B or C or D or E or F) and (not A or not B or not C or not D or not E or not F)
   c) (A or B and (D or not A) and (E or A) -> (B or C and (not D or E))) and (A and not A)
   d) (A and (A -> B) and (B -> C)) ->  C
   e) not ((A and (A -> B) and (B -> C)) -> C)
   f) (A -> not A) or (not A -> A)
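
   Since each sentence has few variables, answers can be checked by enumerating
   all 2^n truth assignments. The brute-force model counter below is a checking
   aid, not part of the required solution: a sentence is valid iff the count is
   2^n, unsatisfiable iff it is 0, and satisfiable otherwise.

```python
from itertools import product

def implies(p, q):
    """Material implication, used to encode '->'."""
    return (not p) or q

def count_models(variables, formula):
    """Count the truth assignments over `variables` satisfying `formula`,
    where `formula` maps a {name: bool} dict to a bool."""
    return sum(bool(formula(dict(zip(variables, values))))
               for values in product([False, True], repeat=len(variables)))
```

   For example, item d, (A and (A -> B) and (B -> C)) -> C, is valid, so it has
   all 2^3 = 8 models over A, B, C; item f has both of its 2 models over A.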
 
   Bonus question: in cases a, b, c, also trace the run of the DPLL algorithm 
   for satisfiability with this formula as input (i.e. explain any recursive calls and 
   cases encountered).
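
   For the bonus trace, it helps to have the skeleton of DPLL in front of you.
   The sketch below does unit propagation and splitting (pure-literal
   elimination is omitted for brevity), with an ad-hoc encoding: literals are
   signed integers (+v means variable v is true, -v false) and clauses are
   frozensets of literals. This is a minimal illustration, not the course's
   exact pseudocode.

```python
def dpll(clauses, assignment=None):
    """DPLL satisfiability test. Returns a satisfying {variable: bool}
    assignment, or None if the clause set is unsatisfiable."""
    if assignment is None:
        assignment = {}
    if any(len(c) == 0 for c in clauses):
        return None                      # empty clause: contradiction
    if not clauses:
        return assignment                # every clause satisfied
    # Prefer a unit clause; otherwise split on an arbitrary literal.
    unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
    candidates = [unit] if unit is not None else [next(iter(clauses[0])),
                                                 -next(iter(clauses[0]))]
    for lit in candidates:
        # Setting `lit` true satisfies clauses containing it and
        # removes its negation from the rest.
        reduced = [c - {-lit} for c in clauses if lit not in c]
        result = dpll(reduced, {**assignment, abs(lit): lit > 0})
        if result is not None:
            return result
    return None
```

   For instance, numbering A..F as 1..6, item b becomes the two clauses
   {1,2,3,4,5,6} and {-1,-2,-3,-4,-5,-6}, which DPLL finds satisfiable.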

6) (FOL and situation calculus - 40%) 
   We need to represent and reason within the scenario defined by assignment 1.
   For simplicity, we will ignore the issue of action costs, and assume only one agent,
   and also K=0 for computing the time/cost of an action.
   
   a) Write down the axioms for the behavior of the world using FOL and
      situation calculus.

      Use the predicates:
        Edge(e, v1, v2) ; e is an edge in the graph between v1 and v2
        Shelter(v)      ; Vertex v contains a hurricane shelter
        Weight(e,w)     ; Weight of edge e is w
        Deadline(t)     ; The end of the world is at time t
      Also use the fluents (situation-dependent predicates)
        Loc(v,s)        ; Agent is at vertex v in situation s
        Carrying(n,s)   ; Agent is carrying n people in situation s.
        Safe(n,s)       ; In situation s, n people are safe
        At(v,n,s)       ; There are n people at vertex v in situation s
        Time(t,s)       ; The time since creation is t in situation s

     Constants: as needed, for the edges and vertices, and simple actions (noop).

     Functions: 
         traverse(e)     ; Denotes a traverse move action across edge e
     and of course the "Result" function:
         Result(a,s)    ; Denotes the situation resulting from doing action a in
                        ; situation s.
     You may wish to add auxiliary predicates or fluents in order to simplify the axioms.
     You are allowed to do so provided you explain why they are needed.


   Let the facts representing the scenario be:
Edge(E1, V0, V1)
Edge(E2, V0, V2)
Edge(E3, V2, V3)
Edge(E4, V1, V3)
Weight(E1,1)
Weight(E2,2)
Weight(E3,1)
Weight(E4,1)
Deadline(9)
Shelter(V0)

; with situation S0 fluents being (note that this is a partial, "closed world assumption" description).
Loc(V0, S0)
Carrying(0, S0)
At(V3, 1, S0)
At(V1, 9, S0)
Safe(0, S0)
Time(0, S0)
   b) Now, we need to find a sequence of actions that will result in some people
      being safe, starting from S0 (which you found in question 1 above), and
      prove a theorem that they (the people from V3) are safe as a result
      (you do not need to prove optimality). Do this by the following steps:
      A) Convert the knowledge base into conjunctive normal form. (This is very
         long, so you may omit the parts not needed in the proof below.)
      B) Formally state what we need to prove (a theorem), and state its negation.
      C) Use resolution to reach a contradiction, thus proving the theorem. At
         each step of the proof, specify the resolvents and the unifier.
   c) Would there be a potential problem in the proof if we did not have
      "frame axioms" in our representation (i.e. axioms stating what did not
      change), and if so, what?
   d) Would we be able to do the proof using only forward chaining?

   Justify all answers briefly!
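
   The unifiers you must report at each resolution step can be computed
   mechanically. Below is a minimal Robinson-style unification sketch (the
   occurs check is omitted), using an ad-hoc term encoding chosen for
   illustration: variables are strings starting with '?', constants are plain
   strings, and compound terms like Loc(v, s) are tuples.

```python
def is_var(t):
    """A term is a variable iff it is a string starting with '?'."""
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    """Follow variable bindings in `subst` until a non-bound term is reached."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(x, y, subst=None):
    """Return a most general unifier of terms x and y extending `subst`,
    or None if they do not unify. (Occurs check omitted for brevity.)"""
    if subst is None:
        subst = {}
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                          # constant clash or arity mismatch
```

   For example, unifying Loc(?v, S0) with the fact Loc(V0, S0) yields the
   unifier {?v -> V0}, which is exactly what each resolution step in part b
   asks you to write down.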

Deadline: 16:00, Tuesday, November 27, 2018 (strict deadline).

Submissions are to be solo.