Problem Solving Agent
decides what to do by finding sequences of actions that lead to desirable states
Uninformed
search algorithms given no information about the problem other than its definition
Informed
search algorithms that have some idea of where to look for solutions
Goal Formulation
the first step in problem solving: the agent adopts a goal based on the current situation and its performance measure

Problem Formulation
which actions and states to consider given a goal
Search Algorithm
takes a problem as input and returns a solution in the form of an action sequence (formulate, search, execute)
Initial State
the state that the agent starts in
Successor Function
a description of the possible actions available to the agent. Given a state, the successor function returns a set of (action, successor-state) pairs
State Space
The initial state and the successor function implicitly define the state space (all possible states from the initial state)
Goal Test
test which determines if a given state is the goal state
Path Cost
assigns a numeric cost to each path
Step Cost
the step cost of taking action a to go from state x to state y is denoted by c(x, a ,y)
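The components above can be sketched together in code. This is a minimal, hypothetical illustration (the class and names are not from the text), assuming states and actions are simple hashable values and unit step costs by default:

```python
class Problem:
    """Sketch of the problem components: initial state, successor
    function, goal test, and step/path costs (hypothetical names)."""

    def __init__(self, initial, successors, goal, step_costs=None):
        self.initial = initial            # Initial State
        self.successors = successors      # Successor Function: state -> [(action, state), ...]
        self.goal = goal
        self.step_costs = step_costs or {}

    def goal_test(self, state):           # Goal Test
        return state == self.goal

    def step_cost(self, x, a, y):         # Step Cost c(x, a, y); defaults to 1
        return self.step_costs.get((x, a, y), 1)

    def path_cost(self, steps):           # Path Cost: sum of step costs over (x, a, y) triples
        return sum(self.step_cost(x, a, y) for (x, a, y) in steps)

# Hypothetical three-state toy problem: A --go--> B --go--> C
succ = {"A": [("go", "B")], "B": [("go", "C")], "C": []}
p = Problem("A", lambda s: succ[s], "C")
```

With unit step costs, the path cost of A-to-C via B is simply 2, the number of steps.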
Abstraction
process of removing detail from a representation is called abstraction
n-puzzle
the object is to slide the tiles to reach a specified goal state, such as the one shown on the right of the figure
Route Finding Problem
is defined in terms of specified locations and transitions along links between them
Touring Problem
visit every node at least once; the Traveling Salesperson Problem is a touring problem in which every node must be visited exactly once and the shortest tour is sought
Measuring Problem Solving Performance
completeness, optimality, time complexity, space complexity
Branching Factor
Maximum number of successors to any node
Breadth-First Search
the root node is expanded first, then all the successors of the root node, and so on. Expands the shallowest unexpanded node
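A sketch of breadth-first search on a small, hypothetical explicit graph (the graph and names are illustrative); the FIFO queue is what makes the shallowest node come out first:

```python
from collections import deque

# Hypothetical graph: successors of each state.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def breadth_first_search(start, goal):
    frontier = deque([[start]])          # FIFO queue of paths: shallowest expanded first
    explored = set()
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for succ in graph[state]:
            frontier.append(path + [succ])
    return None
```

On this graph, searching from A to D returns the shallowest path, ["A", "B", "D"].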
Uniform Cost Search
Expands the node n with the lowest path cost (if all step costs are equal, this is identical to a breadth-first search)
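Uniform-cost search replaces the FIFO queue with a priority queue ordered by path cost g(n). A hedged sketch on a hypothetical weighted graph; with all step costs equal to 1 it would expand nodes in the same order as breadth-first search:

```python
import heapq

# Hypothetical weighted graph: state -> [(successor, step cost), ...]
graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}

def uniform_cost_search(start, goal):
    frontier = [(0, start, [start])]     # priority queue ordered by path cost g(n)
    best = {}                            # cheapest cost found so far per state
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in best and best[state] <= cost:
            continue                     # already expanded more cheaply
        best[state] = cost
        for succ, step in graph[state]:
            heapq.heappush(frontier, (cost + step, succ, path + [succ]))
    return None
```

Here the direct edge A-C costs 5, so the search correctly prefers the cheaper two-step path A-B-C with cost 2.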
Depth First Search
always expands the deepest node in the current frontier; when a node has no successors, the search backs up and expands the next deepest node
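Depth-first search is the same sketch as breadth-first search but with a LIFO stack instead of a FIFO queue (graph and names hypothetical):

```python
# Hypothetical graph: successors of each state.
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}

def depth_first_search(start, goal):
    stack = [[start]]                    # LIFO stack of paths: deepest expanded first
    while stack:
        path = stack.pop()
        state = path[-1]
        if state == goal:
            return path
        for succ in graph[state]:
            if succ not in path:         # avoid cycles along the current path
                stack.append(path + [succ])
    return None
```

The one-line change of data structure (stack vs. queue) is the whole difference between the two strategies.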
Depth-Limited Search
same as depth-first search but with a limit on the maximum depth allowed (not useful unless the maximum possible solution depth is known)
Iterative Deepening Depth First Search
depth-first search, but when all nodes within the current depth limit have been expanded and no solution found, the depth limit is increased
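The two ideas above combine naturally: a recursive depth-limited search, called repeatedly with increasing limits. A sketch on the same kind of hypothetical graph as before:

```python
# Hypothetical graph: successors of each state.
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}

def depth_limited(state, goal, limit, path):
    if state == goal:
        return path
    if limit == 0:
        return None                      # cutoff: depth limit reached
    for succ in graph[state]:
        if succ not in path:             # avoid cycles along the current path
            result = depth_limited(succ, goal, limit - 1, path + [succ])
            if result is not None:
                return result
    return None

def iterative_deepening(start, goal, max_depth=10):
    for limit in range(max_depth + 1):   # raise the depth limit until a solution appears
        result = depth_limited(start, goal, limit, [start])
        if result is not None:
            return result
    return None
```

Because each iteration restarts from the root, shallow solutions are found first, giving iterative deepening breadth-first-like completeness with depth-first-like memory use.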
Bidirectional Search
two simultaneous searches, one forward from the initial state and one backward from the goal state, stopping when the two frontiers meet
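A sketch of bidirectional search on a hypothetical undirected graph, alternating one breadth-first step from each end and joining the two half-paths when the frontiers meet:

```python
from collections import deque

# Hypothetical undirected graph: A - B - C - D
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

def bidirectional_search(start, goal):
    if start == goal:
        return [start]
    fwd = {start: [start]}               # state -> path from the start
    bwd = {goal: [goal]}                 # state -> path from the goal
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        state = qf.popleft()             # one forward expansion
        for succ in graph[state]:
            if succ in bwd:              # frontiers meet: join the half-paths
                return fwd[state] + bwd[succ][::-1]
            if succ not in fwd:
                fwd[succ] = fwd[state] + [succ]
                qf.append(succ)
        state = qb.popleft()             # one backward expansion
        for succ in graph[state]:
            if succ in fwd:
                return fwd[succ] + bwd[state][::-1]
            if succ not in bwd:
                bwd[succ] = bwd[state] + [succ]
                qb.append(succ)
    return None
```

Each half-search only needs to reach roughly half the solution depth, which is the source of the method's potential savings.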
Sensorless Problems
if the agent has no sensors at all, then it could be in one of several possible initial states and each action might therefore lead to one of several possible successor states
Contingency Problems
if the environment is partially observable or if actions are uncertain, then the agent's percepts provide new information after each action.

Each possible percept defines a contingency that must be planned for.

Exploration Problems
when the states and actions of the environment are unknown the agent must act to discover them.