Introduction to Artificial Intelligence

Assignment 3


Homework assignment - simulators, agents, search, games, logic, inference



1)  (agents and environment simulators - 5%):
   For each of the following domains, which type of agent is appropriate
   (table lookup, reflex, goal-based, or utility-based)? Also, indicate the
   type of environment:
   a) An agent that plays Poker.
   b) An agent that can play Sudoku (easy) and Sudoku (hard).
   c) An autonomous humanoid robot that can win the DARPA robotics challenge (driving a car,
      walking across debris, connecting two pipes). Scoring depends on success in the tasks,
      on execution speed, and on using as little communication with the operator as possible.
   d) An internet shopping agent specializing in cheap flights.
   e) An agent that can play Go.

2) (Search - 20%):
   Show the run of the following algorithms on the setting of question 6
   below, where the agent starting at V0 needs to reach the goal vertex G with minimal cost,
   where costs are as defined in assignment 1.
   Assume that h(n) is the cost to reach the goal state in the simplified problem 
   that assumes that the agent gets all the keys for free.

   a) Greedy search (f(n) = h(n)), breaking ties in favor of the state with
      the higher node number.
   b) A* search, (f(n) = g(n)+h(n)), breaking ties as above.
   c) Simplified RTA* with a limit of 2 expansions per real action, breaking ties as above.
   d) Repeat a-c but using h'(n) = 2*h(n) as the heuristic. Is h'(n) admissible?
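
   To sanity-check your hand traces for parts a and b, a generic best-first
   search can be sketched as below. The graph, edge costs, and heuristic values
   here are made-up placeholders, NOT the actual question-6 setting; plug in
   your own successor function and h:

```python
import heapq

def best_first_search(start, goal, neighbors, h, use_g=True):
    """Generic best-first search over integer-numbered vertices.
    use_g=True  -> A*     (f = g + h)
    use_g=False -> greedy (f = h)
    Ties on f are broken in favor of the higher node number, as required."""
    # Priority tuple (f, -node): on equal f, the higher node number pops first.
    frontier = [(h(start), -start, start, 0, [start])]
    best_g = {start: 0}
    while frontier:
        f, _, n, g, path = heapq.heappop(frontier)
        if n == goal:
            return g, path
        for m, cost in neighbors(n):
            g2 = g + cost
            if g2 < best_g.get(m, float('inf')):
                best_g[m] = g2
                f2 = (g2 if use_g else 0) + h(m)
                heapq.heappush(frontier, (f2, -m, m, g2, path + [m]))
    return None

# Toy graph (all numbers invented): vertex 3 plays the role of the goal.
graph = {0: [(1, 1), (2, 1)], 1: [(3, 4)], 2: [(3, 1)], 3: []}
hvals = {0: 2, 1: 2, 2: 1, 3: 0}
cost, path = best_first_search(0, 3, lambda v: graph[v], lambda v: hvals[v])
```

   Running the same function with use_g=False gives the greedy trace for the
   same toy graph; the real exercise still has to be traced by hand on the
   question-6 graph, including the key/lock state in each search node.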

3) (Game trees 10%):
   Consider a 4-person game (A, B, C, D) with complete information and deterministic environment.
   Suppose A has 5 possible moves. Provide a game tree example that shows that for
   each of the following set of rules, A will have a different optimal move:
   a) Each agent out for itself, and they cannot communicate.
   b) Paranoid assumption: B and C are against A
   c) B and C may agree on a deal if it is beneficial to both of them
   d) A and C are partners aiming to maximize the sum of their scores
   e) A and B may reach a deal, if it is beneficial to both of them

   Explain your answer by showing the computed values in the internal nodes in each case.
   Note that you should probably make the example more concise by allowing
   only ONE choice of action for some of the players in some of the branches.
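
   For case (a), the internal-node values can be computed with a max-n backup,
   where each player maximizes its own component of the payoff vector. The
   tree shape and payoffs below are invented purely for illustration (and use
   only 2 players deep, per the conciseness note above):

```python
def maxn(node, player, num_players=4):
    """Max-n backup for case (a): each agent maximizes its own score and
    agents cannot communicate. A node is either a terminal payoff tuple
    (one score per player) or a list of child nodes."""
    if isinstance(node, tuple):          # terminal: payoff vector (A, B, C, D)
        return node
    values = [maxn(child, (player + 1) % num_players, num_players)
              for child in node]
    return max(values, key=lambda v: v[player])  # back up best for this player

# Tiny invented tree: A (player 0) moves, then B (player 1), then terminal.
tree = [
    [(3, 1, 0, 0), (0, 5, 0, 0)],   # after A's move 1, B chooses
    [(2, 2, 0, 0), (2, 0, 0, 0)],   # after A's move 2, B chooses
]
```

   Here B's choices back up (0,5,0,0) and (2,2,0,0), so A prefers move 2: the
   value 3 in the left subtree is unreachable because B would not allow it.
   The other cases (paranoid, coalitions, deals) need different backup rules.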


4)  (Game-tree search - alpha-beta pruning - 15%)
   a) Give an example of a 4-ply game tree (branching factor 2)
      where the savings from alpha-beta pruning are maximal. How many nodes are pruned?
   b) Suppose that we know ahead of time that the terminal values can only
      be integers between -1 and 1. Is there a case where alpha-beta can save even
      more search than the best case above? (Show it, or prove that it cannot help.)
   c) Provide an example of a 2-ply game tree with one additional level of chance
      nodes where one can apply an adapted alpha-beta to prune some nodes. Then
      change the probabilities so that no pruning is possible.
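
   For part a, a plain alpha-beta with a leaf counter makes the savings
   concrete. The 2-ply tree below is an invented example (not an answer to
   the 4-ply question) showing a single cutoff:

```python
def alphabeta(node, alpha, beta, maximizing, stats):
    """Alpha-beta on a tree given as nested lists, with int leaves.
    stats['visited'] counts leaf evaluations, so pruning is visible."""
    if isinstance(node, int):
        stats['visited'] += 1
        return node
    if maximizing:
        v = float('-inf')
        for child in node:
            v = max(v, alphabeta(child, alpha, beta, False, stats))
            alpha = max(alpha, v)
            if alpha >= beta:
                break                     # beta cutoff: remaining children pruned
        return v
    else:
        v = float('inf')
        for child in node:
            v = min(v, alphabeta(child, alpha, beta, True, stats))
            beta = min(beta, v)
            if alpha >= beta:
                break                     # alpha cutoff
        return v

# 2-ply example: after the left MIN branch returns 3, the right MIN branch
# is abandoned as soon as it sees the leaf 2 (so the leaf 1 is never visited).
tree = [[3, 5], [2, 1]]
stats = {'visited': 0}
value = alphabeta(tree, float('-inf'), float('inf'), True, stats)
```

   Only 3 of the 4 leaves are evaluated here; part a asks you to arrange leaf
   values in a 4-ply tree so that the number of such cutoffs is maximal.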


5) (Propositional logic - 10%)
   For each of the following sentences, determine whether it is
   satisfiable, unsatisfiable, and/or valid. For each sentence,
   determine the number of models (over the propositional
   variables in the sentence).
   Note: some items seem hard, but if you use some (natural) intelligence,
   they are easier than they may seem.
   a) (not A and not B and not C and not D and not E) or (A and B and C and D and E)
   b) (A or B or C or D or E) and (not A or not B or not C or not D or not E)
   c) (A or B and (D or not A) and (E or A) -> (B or C and (not D or E)) and (A and not A))
   d) (A and (A -> B) and (B -> C)) -> not C
   e) not ((A and (A -> B) and (B -> C)) -> C)
   f) (A -> not A) and (not A -> A)
   g) (A and (A -> B) and (B -> C)) -> C

   Bonus question: in cases a, b, c, also trace the run of the DPLL algorithm for satisfiability with this
   formula as input (i.e. explain any recursive calls and cases encountered).
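
   A brute-force truth table is a quick way to double-check your classification
   and model counts (it does not replace the requested DPLL trace). The
   encodings below cover only sentences (f) and (g) as examples, with
   implication rewritten as (not p or q):

```python
from itertools import product

def truth_table_check(formula, varnames):
    """Classify a propositional sentence by enumerating all assignments.
    `formula` is a Python boolean function of the sentence's variables."""
    models = sum(formula(*vals)
                 for vals in product([False, True], repeat=len(varnames)))
    total = 2 ** len(varnames)
    status = ('valid' if models == total else
              'unsatisfiable' if models == 0 else
              'satisfiable (not valid)')
    return models, status

# Sentence (g): (A and (A -> B) and (B -> C)) -> C
g = lambda A, B, C: (not (A and (not A or B) and (not B or C))) or C
# Sentence (f): (A -> not A) and (not A -> A)
f = lambda A: (not A or not A) and (A or A)
```

   For example, (g) is the modus-ponens chain and comes out valid with all
   2^3 = 8 models, while (f) reduces to (not A) and A and has no model.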

6) (FOL and situation calculus - 40%) 
   We need to represent and reason within the scenario defined by assignment 1.
   For simplicity, we will ignore the issue of action costs, and assume only one agent.
   
   a) Write down the axioms for the behavior of the world using FOL and
      situation calculus.

      Use the predicates:
        Edge(e, v1, v2) ; e is an edge in the graph between v1 and v2.
        Goal(v)         ; Vertex v is a goal site.
        Weight(e, w)    ; The weight of edge e is w.
      Also use the fluents (situation-dependent predicates)
        Loc(v,s)        ; Agent is at vertex v in situation s
        Carrying(k,s)   ; Agent is carrying key of type k in situation s.
        Lock(v,k,s)     ; Lock of type k at vertex v in situation s.
        Key(v,k,s)      ; Key of type k at vertex v in situation s.

     Constants: as needed, for the edges and vertices, and simple actions (noop).

     Functions: 
         traverse(e)     ; Denotes a traverse move action across edge e (may be ineffective)
     and of course the "Result" function:
         Result(a,s)    ; Denotes the situation resulting from doing action a in
                        ; situation s.
     You may wish to add auxiliary predicates or fluents in order to simplify the axioms.
     You are allowed to do so provided you explain why they are needed.


   Let the facts representing the scenario be:
Edge(E1, V0, V1)
Edge(E2, V0, V2)
Edge(E3, V2, G)
Edge(E4, V1, G)
Weight(E1,1)
Weight(E2,1)
Weight(E3,1)
Weight(E4,4)
Goal(G)

; with situation S0 fluents being (note that this is a partial, "closed world assumption" description).
Loc(V0, S0)
Lock(V2, Fifi, S0)
Key(V1, Fifi, S0)
   b) Now, we need to find a sequence of actions that will result in reaching
      the goal optimally from S0 (which you found in question 1 above), and
      prove a theorem stating that it will reach the goal (you do not need to
      prove optimality). Do this by the following steps:
      A) Convert the knowledge base into conjunctive normal form. (This is very
         long, so you may omit the parts not needed in the proof below.)
      B) Formally state what we need to prove (a theorem), and state its negation.
      C) Use resolution to reach a contradiction, thus proving the theorem.
         At each step of the proof, specify the resolvents and the unifier.
   c) Would there be a potential problem in the proof if we did not have
      "frame axioms" in our representation (i.e. axioms stating what did not
      change), and if so, what?
   d) Would we be able to do the proof using only forward chaining?

   Justify all answers briefly!
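
   Before writing the axioms, it can help to sanity-check the scenario with a
   small forward simulation. The code below is only a hypothetical Python
   mock-up of the Result/traverse semantics, not the requested FOL axioms; in
   particular, the lock behavior (a locked vertex cannot be entered unless the
   agent carries a key of its type) is my reading of the Lock/Key predicates:

```python
# Mock-up of the question-6 scenario: agent at V0 must reach G.
# Edges are (endpoint1, endpoint2, weight), per Edge(...) and Weight(...).
edges = {'E1': ('V0', 'V1', 1), 'E2': ('V0', 'V2', 1),
         'E3': ('V2', 'G', 1), 'E4': ('V1', 'G', 4)}

def traverse(state, e):
    """Result(traverse(e), s): cross edge e if it touches the agent's
    location and the destination is unlocked or its key is carried;
    pick up any key found at the destination. Otherwise ineffective."""
    loc, keys, locks, key_at = state
    v1, v2, _ = edges[e]
    if loc not in (v1, v2):
        return state                      # ineffective: edge not adjacent
    dest = v2 if loc == v1 else v1
    if dest in locks and locks[dest] not in keys:
        return state                      # ineffective: locked, no matching key
    if dest in key_at:
        keys = keys | {key_at[dest]}      # pick up the key at dest
    return (dest, keys, locks, key_at)

# S0: Loc(V0, S0), Lock(V2, Fifi, S0), Key(V1, Fifi, S0)
s0 = ('V0', frozenset(), {'V2': 'Fifi'}, {'V1': 'Fifi'})
s = s0
for a in ['E1', 'E1', 'E2', 'E3']:        # fetch the key at V1, then go via V2
    s = traverse(s, a)
```

   Under these assumed semantics the plan costs 1+1+1+1 = 4, beating the
   keyless route V0-V1-G of cost 5; the proof in part b should establish
   formally that the chosen plan ends in a situation where Loc(G, .) holds.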

Deadline: 16:00, Tuesday, December 13, 2016 (strict deadline).

Submissions are to be solo.