Problem Solving Agent
decides what to do by finding sequences of actions that lead to desirable states
Uninformed search: given no information about the problem other than its definition
Informed search: algorithms that have some idea of where to look for solutions
Goal formulation, based on the current situation and the agent’s performance measure, is the first step in problem solving.
Problem formulation: the process of deciding which actions and states to consider, given a goal
takes a problem as input and returns a solution in the form of an action sequence (formulate, search, execute)
Initial state: the state that the agent starts in
Successor function: a description of the possible actions available to the agent; given a state, it returns a set of ⟨action, successor⟩ pairs
The initial state and the successor function implicitly define the state space (all possible states from the initial state)
Goal test: determines whether a given state is a goal state
Path cost function: assigns a numeric cost to each path
the step cost of taking action a to go from state x to state y is denoted by c(x, a, y)
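The components above (initial state, successor function, goal test, step cost) can be sketched as a small Python class. The graph, location names, and costs here are hypothetical, for illustration only:

```python
# A minimal problem-definition sketch; GRAPH maps each state to its
# successors and the step cost of reaching them (illustrative values).
GRAPH = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 5},
    "C": {"D": 1},
    "D": {},
}

class Problem:
    def __init__(self, initial, goal, graph):
        self.initial = initial          # the state the agent starts in
        self.goal = goal
        self.graph = graph

    def successors(self, state):
        # returns (action, successor) pairs; actions here are "go X"
        return [("go " + s, s) for s in self.graph[state]]

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, x, a, y):
        # c(x, a, y): cost of taking action a from state x to state y
        return self.graph[x][y]

p = Problem("A", "D", GRAPH)
print(p.successors("A"))
```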
process of removing detail from a representation is called abstraction
object is to reach a specified goal state, such as the one shown on the right of the figure
Route Finding Problem
is defined in terms of specified locations and transitions along links between them
visit every node; in the Traveling Salesperson Problem, each city must be visited exactly once and the aim is the shortest tour
Measuring Problem Solving Performance
completeness, optimality, time complexity, space complexity
Branching factor: the maximum number of successors of any node
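As a rough illustration (not from the notes), with branching factor b a breadth-first search generates on the order of 1 + b + b² + ... + b^d nodes to reach depth d, which is why time and space complexity grow exponentially:

```python
# Count the nodes generated up to depth d with branching factor b.
def bfs_nodes(b, d):
    return sum(b ** i for i in range(d + 1))

print(bfs_nodes(10, 3))  # 1 + 10 + 100 + 1000 = 1111
```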
Breadth First Search
expands the shallowest unexpanded node: the root node is expanded first, then all the successors of the root node, and so on
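A minimal breadth-first search sketch, assuming a toy graph (names are illustrative): a FIFO queue guarantees the shallowest path is expanded first.

```python
from collections import deque

GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def bfs(start, goal):
    frontier = deque([[start]])         # FIFO queue of paths
    explored = set()
    while frontier:
        path = frontier.popleft()       # shallowest path first
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for succ in GRAPH[node]:
            frontier.append(path + [succ])
    return None

print(bfs("A", "D"))
```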
Uniform Cost Search
Expands the node n with the lowest path cost (if all step costs are equal, this is identical to a breadth-first search)
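A uniform cost search sketch on a weighted toy graph (hypothetical names and costs): replacing the FIFO queue with a priority queue ordered by path cost g makes the cheapest frontier node come out first.

```python
import heapq

GRAPH = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}

def ucs(start, goal):
    frontier = [(0, start, [start])]    # (path cost g, node, path)
    best = {}                           # cheapest g at which a node was expanded
    while frontier:
        g, node, path = heapq.heappop(frontier)   # lowest path cost first
        if node == goal:
            return g, path
        if node in best and best[node] <= g:
            continue                    # already expanded more cheaply
        best[node] = g
        for succ, cost in GRAPH[node].items():
            heapq.heappush(frontier, (g + cost, succ, path + [succ]))
    return None

print(ucs("A", "D"))
```

Note that the cheapest route A→B→C→D (cost 4) beats the shorter route A→B→D (cost 6), which breadth-first search would have returned.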
Depth First Search
always expands the deepest node in the current frontier; when a node has no successors, the search backs up to the next deepest node
Depth Limited Search
same as depth first search but with a limit on the maximum depth allowed (not useful unless the maximum possible depth can be determined)
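A depth-limited search sketch: plain depth-first recursion that returns a cutoff (None here) once the depth budget is exhausted. The graph is a toy example.

```python
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def dls(node, goal, limit, path=()):
    path = list(path) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None                     # cutoff: depth budget exhausted
    for succ in GRAPH[node]:
        result = dls(succ, goal, limit - 1, path)
        if result:
            return result
    return None

print(dls("A", "D", 2))
```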
Iterative Deepening Depth First Search
runs depth first search with a depth limit; when all nodes within the limit have been expanded and no solution is found, the depth limit is increased and the search restarts
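The idea can be sketched as repeated depth-limited search with a growing limit (toy graph, illustrative names):

```python
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def dls(node, goal, limit, path=()):
    # Depth-first search that gives up (returns None) below the limit.
    path = list(path) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for succ in GRAPH[node]:
        result = dls(succ, goal, limit - 1, path)
        if result:
            return result
    return None

def iddfs(start, goal, max_depth=20):
    # Try limits 0, 1, 2, ... until a solution is found.
    for limit in range(max_depth + 1):
        result = dls(start, goal, limit)
        if result:
            return result
    return None

print(iddfs("A", "D"))
```

Like breadth-first search, the first solution found is a shallowest one, but the memory used is that of depth-first search.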
Bidirectional Search
runs two simultaneous searches, one forward from the initial state and one backward from the goal state, stopping when the two frontiers meet
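A sketch of the idea using two breadth-first frontiers on a small undirected toy graph (illustrative names); it grows the smaller frontier each round and returns the number of steps on a connecting path once the frontiers touch:

```python
from collections import deque

GRAPH = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}

def bidirectional(start, goal):
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}   # distance from each end
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        # expand the smaller frontier (a common heuristic)
        if len(qf) <= len(qb):
            q, near, far = qf, dist_f, dist_b
        else:
            q, near, far = qb, dist_b, dist_f
        node = q.popleft()
        for succ in GRAPH[node]:
            if succ in far:                  # the two frontiers meet
                return near[node] + 1 + far[succ]
            if succ not in near:
                near[succ] = near[node] + 1
                q.append(succ)
    return None

print(bidirectional("A", "D"))
```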
Sensorless problems: if the agent has no sensors at all, it could be in one of several possible initial states, and each action might therefore lead to one of several possible successor states
Contingency problems: if the environment is partially observable or if actions are uncertain, the agent’s percepts provide new information after each action; each possible percept defines a contingency that must be planned for
Exploration problems: when the states and actions of the environment are unknown, the agent must act to discover them