CS 171 Artificial Intelligence (Russell & Norvig)

AI
Study of systems that:
– think like humans
– act like humans
– think rationally
– act rationally
Turing test
Test for intelligent behavior

The system being tested passes if a human interrogator cannot tell whether its answers come from a person or a machine

Think like humans
System that can:
Formulate a theory of mind/brain
Express the theory in a computer program
Cognitive science and psychology
Approach to creating a system that thinks like a human by testing or predicting the response of human subjects
Cognitive neuroscience
Approach to creating a system that thinks like a human by observing neurological data
Think rationally
System that can solve problems using “laws of thought” (syllogisms, notation and logic, etc.)
Rational
Ideal intelligence (in contrast with human intelligence)
Act rationally
System that carries out actions to achieve the best outcome (or, under uncertainty, the best expected outcome)
Agent
Anything that perceives and acts on its environment
AI
Study of rational agents
T
Rational agents carry out an action with the best outcome after considering past and current percepts (T/F)
a = F(p)
p = current percept
a = action carried out
F = agent function
Agent function
Function that maps from percept histories to actions
f : P* → A
T
Agent = architecture + program (T/F)
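As a minimal sketch (not from the lecture notes), the agent function f : P* → A can be realized by an agent program running on the architecture, e.g. a table-driven agent that appends each percept to its history and looks up an action; the percepts, actions, and table entries below are hypothetical.

# Sketch of an agent function f : P* -> A as a table-driven agent program.
# The percept/action names and the lookup table are hypothetical examples.
def make_table_driven_agent(table):
    percept_history = []              # P*: the sequence of percepts seen so far

    def agent_program(percept):       # the program that runs on the architecture
        percept_history.append(percept)
        # The agent function maps the entire percept history to an action.
        return table.get(tuple(percept_history), "NoOp")

    return agent_program

# Hypothetical (partial) table for a two-location vacuum world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))   # -> Right
print(agent(("B", "Dirty")))   # -> Suck
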
Performance measure
P in PEAS
Captures the agent’s aspiration: the criteria for judging how successfully it behaves
Environment
E in PEAS
Context, restrictions
Actuators
A in PEAS
Indicates what actions the agent can carry out
Sensors
S in PEAS
Indicates what the agent can perceive
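A worked PEAS example (paraphrased from the textbook’s automated-taxi example; the exact entries here are illustrative), written as a simple record:

# PEAS description of an automated taxi driver (illustrative values,
# paraphrased from the Russell & Norvig example).
taxi_peas = {
    "Performance measure": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment":         ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":           ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors":             ["cameras", "GPS", "speedometer", "sonar", "odometer"],
}
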
Fully observable
Environment where everything an agent requires to choose its actions is available to it via its sensors

vs. Partially observable

Deterministic
Environment where the next state is completely determined by the current state and the agent’s action (no randomness)
Ex. In chess, making a given move on a given board always produces the same resulting board

vs. Stochastic

Stochastic
Environment where the next state is not fully determined by the current state and action; outcomes involve chance or uncertainty
Episodic
Environment divided into independent episodes; the choice of the current action does not depend on actions taken in previous episodes

vs. Sequential

Sequential
Environment where previous choices are taken into account; the current choice can affect all future decisions

vs. Episodic

Static
Environment that does not change while the agent is deliberating

vs. Dynamic

Dynamic
Environment that can change while the agent is deliberating

vs. Static

Discrete
Environment with a finite, countable set of distinct states, percepts, and actions (e.g., a board game)

vs. Continuous

Continuous
Environment whose states, percepts, actions, or time vary over continuous ranges (e.g., taxi driving)

vs. Discrete

Single agent
Agent operating by itself vs. multiagent – several agents acting in the same environment, cooperatively or competitively
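To see how these properties combine, here is a hedged sketch classifying two familiar tasks along the dimensions above, loosely following the textbook’s environment-types table:

# Illustrative classification of two tasks along the environment dimensions
# above (simplified from the textbook's environment-types table).
environment_types = {
    "chess with a clock": {
        "observable": "fully",
        "deterministic": True,
        "episodic": False,        # sequential
        "static": True,           # semidynamic if the clock is counted
        "discrete": True,
        "single_agent": False,    # two players -> multiagent
    },
    "taxi driving": {
        "observable": "partially",
        "deterministic": False,   # stochastic
        "episodic": False,        # sequential
        "static": False,          # dynamic
        "discrete": False,        # continuous
        "single_agent": False,    # multiagent
    },
}
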
Reflex agent
Agent that selects its action based only on the current percept, via condition–action rules (given percept A, do action B)
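A minimal sketch of a simple reflex agent for the two-location vacuum world; the condition–action rules follow the standard textbook example, but the function name and percept encoding here are my own:

# Simple reflex agent: the action depends only on the current percept
# (location, status), via condition-action rules. No percept history is kept.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))  # -> Right
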
Reflex agent with state
Reflex agent that also maintains internal state to track the parts of the world it cannot currently perceive
Goal based agent
Agent that has a goal and chooses actions that move it toward a goal state
Utility based agent
Agent that also considers a “happiness factor” (utility) in addition to the goal state, letting it compare different ways of achieving goals
Learning agent
Agent with a performance element and a learning element; the learning element modifies the performance element so the agent improves over time
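A hedged sketch contrasting goal-based and utility-based action selection: the goal-based agent checks whether an action’s outcome satisfies the goal, while the utility-based agent picks the action whose outcome has the highest utility (the “happiness factor”). The result and utility functions below are hypothetical placeholders.

# Hypothetical state-transition and utility functions, for illustration only.
def result(state, action):
    # Deterministic toy model: an action adds its value to the state.
    return state + action

def utility(state):
    # Toy "happiness factor": prefer states closer to 10.
    return -abs(10 - state)

def goal_based_choice(state, actions, goal_test):
    # Pick any action whose outcome satisfies the goal.
    for a in actions:
        if goal_test(result(state, a)):
            return a
    return None

def utility_based_choice(state, actions):
    # Pick the action whose outcome has the highest utility.
    return max(actions, key=lambda a: utility(result(state, a)))

actions = [1, 2, 3]
print(goal_based_choice(7, actions, goal_test=lambda s: s == 10))  # -> 3
print(utility_based_choice(7, actions))                            # -> 3
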