1) (FOL and situation calculus)
We need to represent and reason about the scenario defined in assignment 1.
For simplicity, we will ignore the issue of action costs, assume only one agent, and allow
only the drive and pickup actions.
a) Write down the axioms for the behavior of the world using FOL and
situation calculus.
Use the predicates:
Edge(e, v1, v2) ; e is an edge in the graph between v1 and v2.
Goal(ag, v) ; Agent ag's goal is vertex v.
Also use the fluents (situation-dependent predicates)
Loc(v,s) ; Agent is at vertex v in situation s
Fire(e,s) ; Edge e is on fire in situation s
Carrying(s) ; Agent is carrying water in situation s
Water(v,s) ; Vertex v has water in situation s
Constants: as needed, for the edges and vertices, and of course the pickup action
pickup ; Denotes attempting to pick up water
Functions: not needed here, except to denote actions:
drive(e) ; Denotes a move action across edge e (may be ineffective).
and of course the "Result" function:
Result(a,s) ; Denotes the situation resulting from doing action a in
; situation s.
You may wish to add auxiliary predicates or fluents in order to simplify the axioms. You
are allowed to do so provided you explain why they are needed.
Add the facts representing the following scenario in situation S0:
Edge(E1, V0, V1)
Edge(E2, V1, V2)
Edge(E3, V3, V1)
Edge(E4, V0, V3)
Edge(E5, V3, V2)
; Situation S0 fluents
Loc(V0, S0)
Water(V0, S0)
Fire(E1, S0)
Fire(E5, S0)
b) Now, we need to find a sequence of actions that will result in reaching the goal
of agent A1 being at V1,
and prove a theorem stating that it does so. Do the following steps:
A) Convert the knowledge base to conjunctive normal form.
B) Formally list what we need to prove, and state its negation.
C) Use resolution to reach a contradiction, thus proving the theorem.
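As an illustrative aside (not a substitute for the required hand derivation in steps A-C), resolution refutation can be mechanized. The sketch below uses a hypothetical encoding of clauses as sets of signed integers, not the actual CNF of this knowledge base:

```python
# Minimal propositional resolution sketch. Literals are nonzero ints:
# p is a positive int, not-p is its negation. Clauses are sets of literals.
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of clauses c1 and c2."""
    out = []
    for lit in c1:
        if -lit in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {-lit})))
    return out

def resolution_refutes(clauses):
    """True iff the empty clause is derivable, i.e. the set is unsatisfiable."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True   # empty clause: contradiction reached
                new.add(r)
        if new <= clauses:
            return False          # fixpoint, no refutation exists
        clauses |= new

# Toy example: KB = {P, P -> Q}, negated goal = {not Q}; refutation succeeds.
print(resolution_refutes([{1}, {-1, 2}, {-2}]))  # True
```

The same pattern applies to the proof here: add the negation of the goal statement to the CNF knowledge base and resolve until the empty clause appears.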
c) What would be a potential problem in the proof if we did not
have "frame axioms" in our representation (i.e. stating what
did not change)?
d) Would we be able to do the proof using only forward chaining?
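For reference when thinking about part d, this is the inference style in question: a minimal forward-chaining sketch for propositional Horn clauses (illustrative only; the rules below are hypothetical, not drawn from this knowledge base):

```python
# Forward chaining over Horn rules: repeatedly fire any rule whose
# premises are all known, until no new facts can be derived.
# A rule is a pair (premises, conclusion); facts is a set of known atoms.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [(('P',), 'Q'), (('Q', 'R'), 'S')]
print(forward_chain({'P', 'R'}, rules))  # derives Q, then S
```

Note that forward chaining of this kind is complete only for Horn knowledge bases, which is the crux of part d.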
2) (agents and environment simulators)
For each of the following domains, which type of agent is appropriate
(table lookup, reflex, goal based, or utility based). Also, indicate the
type of environment:
a) An agent that plays Poker.
b) An agent that can play Spider solitaire.
c) An autonomous humanoid robot that can win the DARPA robotics challenge (driving a car,
walking across debris, connecting two pipes). Scoring depends on success in the tasks,
on execution speed, and on using as little communication with the operator as possible.
d) An internet coordination agent (sets meetings between humans with conflicting goals).
e) An agent that can solve a Sudoku puzzle.
f) An agent that plays Go.
3) (Propositional logic)
For each of the following sentences, determine whether they are
satisfiable, unsatisfiable, and/or valid. For each sentence that
is satisfiable, determine the number of models (over the propositional
variables in the sentence). In case b, also trace the run of the DPLL satisfiability algorithm with this
formula as input (i.e., explain any recursive calls and cases encountered).
a) (A and (A -> B) and (B -> C)) -> C
b) (A -> not A) and (not B -> B)
c) (A or B or C or D or E or F or G)
d) (A and B) or (C or not D or E)
e) (A and (A -> B) and (B -> C)) -> not C
f) not ((A and (A -> B) and (B -> C)) -> C)
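The required trace for case b is a hand trace, but for intuition here is a compact DPLL sketch (CNF clause lists with signed-integer literals, a hypothetical encoding):

```python
# DPLL with unit propagation and branching. Clauses are lists of nonzero
# ints (negative = negated). Returns a satisfying assignment (tuple of
# literals made true) or None if the formula is unsatisfiable.

def dpll(clauses, assignment=()):
    clauses = [list(c) for c in clauses]
    # Unit propagation: repeatedly assign the literal of any unit clause.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment += (lit,)
        new = []
        for c in clauses:
            if lit in c:
                continue               # clause satisfied, drop it
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None            # empty clause: conflict
            new.append(reduced)
        clauses = new
    if not clauses:
        return assignment              # all clauses satisfied
    # Branch on the first literal of the first remaining clause.
    lit = clauses[0][0]
    for choice in (lit, -lit):
        result = dpll(clauses + [[choice]], assignment)
        if result is not None:
            return result
    return None

# Case b encoded with A=1, B=2: (A -> not A) and (not B -> B) simplifies
# to the unit clauses {not A} and {B}; DPLL finds A=false, B=true.
print(dpll([[-1], [2]]))  # (-1, 2)
```

In the hand trace, the interesting points to report are exactly the ones this code makes explicit: which unit propagations fire, where branching occurs, and where conflicts force backtracking.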
4) (Search):
Show the run of the following algorithms on the setting of question 1
above, where the agent starting at V0 needs to reach V2 with minimal cost.
The costs are: Cpickup=2, and edge weights are: E5 costs 1,
E3 costs 5, E1 and E2 and E4 cost 2.
Assume that h(n) is the cost to reach V2 assuming no fires.
a) Greedy search (f(n) = h(n)), breaking ties in favor of states where the
agent is not carrying water, and then in favor of lower-numbered edges.
b) A* search, (f(n) = g(n)+h(n)), breaking ties as above.
c) Simplified RTA* with a limit of 2 expansions per real action, breaking ties as above.
d) Repeat a-c but using h'(n) = 2*h(n) as the heuristic. Is h'(n) admissible?
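As a sanity check for the hand runs, the sketch below runs A* on the plain road graph with the stated edge costs. Note this covers only the relaxed, fire-free problem used to define h(n); the full search space for the question also tracks whether the agent is carrying water, since E5 is on fire in the scenario:

```python
# A* on the fire-free graph of question 1 with the costs of question 4.
import heapq

EDGES = {  # edge: (endpoint, endpoint, cost)
    'E1': ('V0', 'V1', 2), 'E2': ('V1', 'V2', 2), 'E3': ('V3', 'V1', 5),
    'E4': ('V0', 'V3', 2), 'E5': ('V3', 'V2', 1),
}

def neighbors(v):
    for e, (a, b, w) in EDGES.items():
        if v == a: yield b, w, e
        if v == b: yield a, w, e

def astar(start, goal, h):
    frontier = [(h(start), 0, start, [])]   # (f, g, vertex, edge path)
    best_g = {}
    while frontier:
        f, g, v, path = heapq.heappop(frontier)
        if v == goal:
            return g, path
        if best_g.get(v, float('inf')) <= g:
            continue                        # already closed with better g
        best_g[v] = g
        for u, w, e in neighbors(v):
            heapq.heappush(frontier, (g + w + h(u), g + w, u, path + [e]))
    return None

# h(n): true remaining cost to V2 assuming no fires (precomputed by hand).
H = {'V0': 3, 'V1': 2, 'V2': 0, 'V3': 1}
print(astar('V0', 'V2', H.get))  # (3, ['E4', 'E5']) on the fire-free graph
```

Greedy search is the same loop with f(n) = h(n) only; the tie-breaking rules in parts a-c would additionally need to be encoded into the heap ordering.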
5) (Game trees and Search):
Consider a 3-person game (A, B, C) with complete information, where A and C are fully
cooperating: the combined score for A and C is the
NEGATIVE of B's score.
For example, if A scores 1, C also scores 1, and B scores -1.
However, C is a player that plays completely randomly, with
a uniform distribution over its legal moves. Assume that players take turns in the order A, B, C,
repeatedly, until the game ends.
a) Write an algorithm for optimal play by player A.
b) If the game now is partial information: A cannot communicate with C,
and C cannot see the moves made by A, but
A can see all moves made by C, is your algorithm
still optimal? In what cases, if any, can you run into problems?
c) Same as in case b, but now C can see A's moves, but A cannot see the moves made by C.
d) Show an example 3-ply game tree with branching factor 2, and show
the values computed in this tree in case a.
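The algorithm asked for in part a is a form of expecti-minimax: A maximizes, B minimizes (B's score being the negative of A's), and C's moves are averaged because C plays uniformly at random. A sketch, using a hypothetical nested-list tree encoding (a number is a terminal giving A's payoff; a list is an internal node):

```python
# Expecti-minimax for the A (max) / B (min) / C (uniform chance) game.
PLAYERS = ('A', 'B', 'C')  # fixed turn order, repeating

def value(node, turn=0):
    """node: a number (terminal, A's payoff) or a list of child nodes."""
    if isinstance(node, (int, float)):
        return node
    vals = [value(child, (turn + 1) % 3) for child in node]
    who = PLAYERS[turn]
    if who == 'A':
        return max(vals)                   # A picks the best move
    if who == 'B':
        return min(vals)                   # B plays adversarially
    return sum(vals) / len(vals)           # C: expectation, uniform moves

# A 3-ply example with branching factor 2 (cf. part d): A, then B, then C.
tree = [[[4, 0], [2, 6]],
        [[1, 3], [5, 5]]]
print(value(tree))  # 2.0
```

Parts b and c then ask how this breaks down once the assumption of a fully observed game state no longer holds.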
6) (Game-tree search - alpha-beta pruning)
a) Give an example of a 3-ply game tree (branching factor 2)
where alpha-beta pruning saves NOTHING, and another where the savings
is maximal. How many nodes are pruned in each case?
b) Suppose that we know ahead of time that the terminal values can only
be integers between -2 and 2. Is there a case where alpha-beta can save even
more search than in the best case above? (Show it, or prove that it cannot help.)
c) Provide an example of a 2-ply + 2 chance node level game tree where
one can apply an adapted alpha-beta to prune nodes,
and a similar example where changing the distribution on the chance node
edges results in no savings for pruning.
Justify all answers briefly!
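For reference when constructing the trees in question 6, here is standard alpha-beta over a hypothetical nested-list encoding (a number is a terminal value for MAX; a list is an internal node, MAX to move at the root):

```python
# Alpha-beta pruning: skip a child subtree as soon as the current window
# (alpha, beta) proves it cannot affect the value at the root.

def alphabeta(node, alpha=float('-inf'), beta=float('inf'), maximizing=True):
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        v = float('-inf')
        for child in node:
            v = max(v, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, v)
            if alpha >= beta:
                break                  # beta cutoff: remaining children pruned
        return v
    v = float('inf')
    for child in node:
        v = min(v, alphabeta(child, alpha, beta, True))
        beta = min(beta, v)
        if alpha >= beta:
            break                      # alpha cutoff
    return v

# With best-first child ordering the second MIN node is cut off after one
# leaf here: after the left subtree, alpha = 2, and leaf 1 closes the window.
tree = [[3, 2], [1, 0]]
print(alphabeta(tree))  # 2
```

The savings in part a come entirely from move ordering, which is why the same tree can yield no pruning at all when its children are visited worst-first.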
Deadline: Noon (12:00), Tuesday, December 18, 2013 ( strict deadline).
Submissions are to be solo.