COGS/PHIL 3750: Philosophy of Artificial Intelligence (Ch. 2)

What is intelligence?
Intelligence can be defined in various ways:
∙ thinking creatively, abstractly
∙ reasoning using logic
∙ planning, solving problems
∙ combining ideas in productive ways
∙ like many such concepts, it may not have a single, brief definition
∙ but maybe we can recognize it when we see it?
Thomas Hobbes on intelligence
∙ thinking or reasoning is a kind of calculation, combining ideas together according to rules.

“For ‘reason’ in this sense is nothing but ‘reckoning’, that is, adding and subtracting, of the consequences of general names agreed upon for the ‘marking’ and ‘signifying’ of our thoughts” (Thomas Hobbes).

Logical reasoning
∙ this type of reasoning is evident in simple logical or deductive arguments:
1) if it’s raining, then my soccer match will be canceled
2) if my soccer match is canceled, then either I will work on my paper or read my book
3) but I cannot work on my paper, since I don’t have my laptop
4) it is raining; therefore, I will read my book
logical arguments
∙ logical arguments are valid when they have the right form (content doesn’t matter; see the sketch after the schema below):
if P, then Q.
if Q, then either R or S.
Not R, since T.
P, therefore S.
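To make the form/content point concrete, here is a small illustrative sketch (not part of the lecture) in Python: it brute-forces every assignment of truth values and confirms that the schema above has no counterexample. The function name `valid` and the lambda encoding are just one possible way to write this.

```python
# A minimal sketch of checking validity by form with brute-force truth tables:
# an argument is valid iff no assignment makes all premises true and the conclusion false.
from itertools import product

def valid(premises, conclusion, variables):
    """Return True if no counterexample assignment exists."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # counterexample: premises true, conclusion false
    return True

# The schema from the slide: if P then Q; if Q then R or S; not R; P; therefore S.
premises = [
    lambda v: (not v["P"]) or v["Q"],            # if P, then Q
    lambda v: (not v["Q"]) or v["R"] or v["S"],  # if Q, then either R or S
    lambda v: not v["R"],                        # not R (the reason T does not affect the form)
    lambda v: v["P"],                            # P
]
conclusion = lambda v: v["S"]                    # therefore S

print(valid(premises, conclusion, ["P", "Q", "R", "S"]))  # True: valid by form alone
```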
What about artificial intelligence?
∙ if (human or natural) intelligence is based on using such rules, can this be done artificially?
∙ could we design a machine that can carry out such operations?
∙ What are the bare minimum requirements for such a machine?
∙ what would an abstract description of such a machine look like?
Turing machine
∙ before computers, Turing conceived of a machine that could follow rules:
∙ algorithm: a rule that can be followed by a machine
∙ this is not a prototype for a computer; it is more like an ideal model of rule-following devices
∙ it’s a rule-following machine stripped down to its bare essentials
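As an illustration of “a rule-following machine stripped down to its bare essentials,” here is a minimal Python sketch of a Turing-style machine. The rule table, tape format, and the bit-inverting example program are assumptions made up for this demo, not anything from Turing or the lecture.

```python
# A machine that follows a finite table of rules: it reads a symbol at the head,
# writes a symbol, moves left or right, and changes state, until it halts.

def run_turing_machine(rules, tape, state="start", head=0, blank="_", halt="halt", max_steps=1000):
    tape = dict(enumerate(tape))                           # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, blank)
        state, new_symbol, move = rules[(state, symbol)]   # look up the rule
        tape[head] = new_symbol                            # write
        head += 1 if move == "R" else -1                   # move the head
    return "".join(tape[i] for i in sorted(tape))

# Example rule table: invert a string of 0s and 1s, then halt at the blank symbol.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run_turing_machine(rules, "0110_"))   # -> "1001_"
```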
Alan Turing
∙ logician, philosopher, mathematician
∙ one of the founders of computer science
∙ was involved in cracking Nazi codes during World War II
∙ also known for another very influential idea in AI
∙ could computers ever be intelligent?
∙ Turing: could they pass the “imitation game”?
∙ this is widely known as the Turing test
the frame problem in context
∙ the frame problem and the nature/nurture debate
∙ the frame problem and the problem of induction
∙ the initial formulation of the frame problem (narrow construction)
∙ the semantic versus the syntactic problems
∙ the epistemological frame problem
∙ the computational problem versus the real problem (Hamlet’s problem)
arguments using the frame problem
∙ Fodor
∙ Dreyfus
∙ Possible defense strategies
the frame problem
∙ in order to react properly in a situation, what has to be considered, and what can be neglected (think of R2D2 and the bomb on the wagon)?
∙ the number of factors to consider in some situations is enormous
∙ how can a person/system decide which information is relevant in a given context?
the frame problem: Dennett on midnight snacks
∙ in order to plan an action properly, what has to be considered, what can be neglected?
∙ mayonnaise doesn’t melt knives
∙ opening the fridge doesn’t cause an explosion
∙ if I am holding the mayonnaise jar in my left hand, I cannot also be spreading the mayonnaise with my left hand
the frame problem: Dennett on midnight snacks cont’d
an efficient system of information storage requires efficient:
∙ space management: our brains are not all that large, yet they must store all this knowledge
∙ time management: stored information has to be reliably accessible within short, real-time spans if the system is to count as intelligent
computation: deductive logic isn’t sufficient
∙ the system’s world knowledge is represented in axioms
∙ logic is used to determine the effects of actions
∙ frame axioms: descriptions of what does not change in the situation (e.g., the sandwich situation); see the toy example below
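A toy sketch (my own illustration, not the lecture’s formalism) of why this gets awkward: the effect of an action is easy to state, but a purely deductive axiomatization would also need explicit frame axioms for every fact that does not change. The facts and the `open_fridge` action below are invented for the example.

```python
# Effect axioms versus frame axioms for a sandwich-making situation.
# The state lists simple facts; the action has one effect, but a naive
# axiomatization must also say, for every other fact, that it does NOT change.

state = {
    "fridge_open": False,
    "mayo_in_fridge": True,
    "bread_on_counter": True,
    "knife_in_drawer": True,
}

def open_fridge(state):
    new_state = dict(state)
    new_state["fridge_open"] = True   # the effect axiom: opening makes the fridge open
    # Frame "axioms": everything else stays the same. The dict copy does this
    # implicitly here, but a purely deductive system needs one axiom per non-effect:
    #   opening the fridge does not move the bread,
    #   opening the fridge does not relocate the knife, ... and so on for every fact.
    return new_state

print(open_fridge(state))
```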
why deductive logic is insufficient
∙ the psychological plausibility problem:
∙ introspectively it doesn’t seem that humans think like this
∙ so either we do it unconsciously, or we don’t

the property problem:
∙ virtually every aspect of the situation can change under some circumstances
∙ that requires an axiom for every such aspect, since there is some case in which it might change
∙ strategy: stipulating that only explicitly stated changes count creates a further problem: you can’t just change one aspect; a change typically comes with other changes (Dennett’s plate example).

Dennett on midnight snacks: a possible solution
habits and routines:
∙ idea: maybe Dennett’s midnight-snack routines, developed over the years, guide his actions. A mechanism of some complexity contains subroutines for mayonnaise spreading, sandwich making, and getting something out of the fridge (see the sketch below).
∙ These quasi-automatic routines (which include subgoal checks) mean he does not need to consider all hypothetical options.
∙ maybe the problem of induction (Hume) is the frame problem? After all, we want systems to have the right expectations and draw the right inferences
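One way to picture the routines idea is as a fixed procedure built from subroutines with cheap subgoal checks, rather than open-ended deliberation. The following Python sketch is purely illustrative; the function names, kitchen contents, and fallback behaviour are invented.

```python
# A midnight-snack routine as a fixed sequence of subroutines, each with a quick
# subgoal check, instead of reasoning from scratch about every hypothetical consequence.

def get_from_fridge(item, kitchen):
    if item not in kitchen["fridge"]:        # subgoal check, not open-ended deliberation
        raise RuntimeError(f"{item} missing: fall back to deliberate planning")
    kitchen["counter"].append(item)          # fetch the ingredient onto the counter

def make_snack(kitchen):
    for item in ("mayonnaise", "turkey"):
        get_from_fridge(item, kitchen)       # subroutine: fetch ingredients
    if "bread" not in kitchen["counter"]:    # subgoal check before assembling
        kitchen["counter"].append("bread")
    return "sandwich"                        # spreading/assembly subroutines omitted

kitchen = {"fridge": ["mayonnaise", "turkey", "beer"], "counter": []}
print(make_snack(kitchen))   # -> "sandwich"
```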
Different formulations of the frame problem
∙ the initial formulation (McCarthy and Hayes 1969, narrow construction): in real-time planning systems with strategic planning, how can we represent the available options without creating information overflow? How can we represent the effects of actions without having to represent the non-effects explicitly?
Different formulations of the frame problem
∙ Dennett: we need to distinguish the semantic problem (or knowledge problem) from the syntactic problem (availability problem)

Semantic problem or Newell’s problem of the knowledge level:
∙ what information must be installed?

Syntactic Problem: What kind of system, what kind of representational format, which structures, processes, or mechanisms do we use to store this information?

The more general frame problem
∙ the frame problem is not just a technical problem of AI or robotics
∙ it’s a general epistemological problem: how do humans (or any intelligent system) know/decide which options to neglect as irrelevant, without first computing/creating them as options?
the epistemological frame problem:
∙ How do humans know which options to neglect as irrelevant, without first computing/creating them as options? When humans consider the consequences of an action, how do they limit the scope of the reasoning that is required?

∙ Dennett: “how can a cognitive creature with many beliefs about the world update those beliefs when it performs an act so that they remain roughly faithful to the world?”

computational and the real frame problem
∙ how can we compute the consequences of an action without computing all the non-effects of that action?
∙ Hamlet’s problem: when to stop thinking
∙ even if we solve the computational worry (by using decision heuristics, for example), the real philosophical issue is still unsolved
∙ how can the robot ever be sure it has sufficiently thought through the consequences of its actions and didn’t miss anything important?
Fodor’s Conclusion:
∙ Fodor 1983 uses the frame problem to argue against central modularity
∙ claim: the mind’s central processes can draw on information from any source; they are “informationally unencapsulated.”
objections based on the frame problem
∙ Dreyfus 1972: most human knowledge and competence, in particular specialized knowledge, cannot in fact be reduced to algorithmic/computational procedures. Human knowledge is not computable in the AI sense. AI overlooks a principled difference between the kind of cognition one employs when learning a skill and the kind employed by an expert. Thus, AI is a fundamentally mistaken method for studying the mind.
possible defense strategies against Dreyfus
∙ strategy one: Dreyfus starts with properties of present-day AI systems and draws inferences about all possible rule-based formal systems on that basis. But the failures of particular examples from a discipline that is still very young are insufficient to support that conclusion.
Marvin Minsky and Roger Schank
∙ Late 70s and early 80s
∙ in humans, all of life’s experiences, for all their variety, can be understood as variations on a manageable number of stereotypic themes and paradigmatic scenarios: “frames,” as Minsky calls them; Schank refers to them as “scripts.”

Dennett’s answer: the scripts/frames approach is an “attempt to resolve the frame problem”: it implements frames and scripts for the stereotypical situations a particular system is likely to encounter (see the sketch below).
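A rough sketch of what a script might look like as a data structure, using Schank’s classic restaurant example; the slot names and the `expected_next` helper are illustrative assumptions, not Schank’s actual notation.

```python
# Stereotypical knowledge pre-packaged as a "script": roles, props, an ordered
# sequence of scenes, and default assumptions the system never has to re-derive.

restaurant_script = {
    "roles":  ["customer", "waiter", "cook"],
    "props":  ["table", "menu", "food", "bill"],
    "scenes": ["enter", "order", "eat", "pay", "leave"],
    "defaults": {"food_is_edible": True, "customer_pays": True},
}

def expected_next(script, current_scene):
    """Fill in an expectation from the script instead of reasoning from scratch."""
    scenes = script["scenes"]
    i = scenes.index(current_scene)
    return scenes[i + 1] if i + 1 < len(scenes) else None

print(expected_next(restaurant_script, "order"))   # -> "eat"
```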

Cognitive wheels
∙ a cognitive wheel is simply any design proposal in cognitive theory (at any level, from the purest semantic level to the most concrete level of wiring diagrams of the neurons) that is profoundly unbiological, however wizardly and elegant it is as a bit of technology.
∙ Clearly this is a vaguely defined concept, useful only as a rhetorical abbreviation, as a gesture in the direction of real difficulties to be spelled out carefully. “Beware of postulating cognitive wheels” masquerades as good advice to the cognitive scientist, while courting vacuity. It occupies the same rhetorical position as the stockbroker’s maxim: buy low and sell high. Still, the term is a good theme-fixer for discussion.
why cognitive wheels don’t work
“If these procedural details lack psychological reality, then there is nothing left in the proposal that might model psychological processes except the phenomenological-level description in terms of jumping to conclusions, ignoring, and the like – and we already know we do that.” (p. 14)
strategy two
∙ the frame problem might be a (maybe general) problem for certain types of systems, namely classical rule-based systems, but the claim is that other types of systems are able to avoid it:
1) connectionism: “bottom-up” strategies
2) dynamic, situated approaches
∙ Rodney Brooks: attempts to build simple insect-level intelligence without rule-based procedures
∙ Andy Clark: action-oriented representations
overview part two: reverse engineering, research methodologies
∙ top-down versus bottom-up: models versus research strategies
∙ Marr’s levels of explanation
∙ Marr’s model of research strategy
Example: visual perception
∙ Dennett’s hierarchy of levels and the intentional stance
∙ CogSci as reverse engineering: the hierarchy of levels
∙ Dennett: the problem with reverse engineering
∙ AI and AL (artificial life)
bottom-up and top-down
∙ we can study the mind bottom-up, beginning with individual neurons, or even molecules, and then try to build up from there by reverse engineering to higher cognitive functions
∙ or we can start with general theories about thought and about how cognition works, and then work downwards to investigate how the corresponding mechanisms might be instantiated in the brain
∙ in both cases, we have to consider different levels of explanation, which often correspond to different disciplines
speech perception: bottom-up and top-down
∙ speech perception is partially data-driven
∙ it can’t be completely data-driven: which languages you can understand/speak matters, and there are top-down effects depending on interest
∙ the suitable trade-off between top-down and bottom-up influences is a central parameter: the system should filter out noise without over-interpreting (see the toy sketch below)
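A toy model of that trade-off parameter (with made-up numbers, not a real speech system): a single weight w mixes bottom-up acoustic evidence with top-down contextual expectations. At w = 0 the system is fooled by noise; at w = 1 it risks over-interpreting.

```python
# One parameter trades off bottom-up (data-driven) against top-down (expectation-driven) evidence.

def recognize(acoustic_scores, context_scores, w):
    """Pick the word maximizing a weighted mix of bottom-up and top-down evidence."""
    return max(acoustic_scores,
               key=lambda word: (1 - w) * acoustic_scores[word] + w * context_scores[word])

# Noisy input after "I drink a cup of ...": the acoustics slightly favor "key",
# but the context strongly favors "tea".
acoustic = {"tea": 0.45, "key": 0.55}
context  = {"tea": 0.90, "key": 0.10}

print(recognize(acoustic, context, w=0.0))   # purely data-driven -> "key" (fooled by noise)
print(recognize(acoustic, context, w=0.5))   # balanced           -> "tea"
print(recognize(acoustic, context, w=1.0))   # purely top-down    -> "tea" (risks over-interpreting)
```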
Levels of Explanation
∙ the computational level: what is the goal of the computation, why is it appropriate, and what is the logic of the strategy by which it can be carried out?
∙ the algorithmic level: how can the computational theory be implemented? What is the representation for the input and output? What is the algorithm for the transformation?
∙ the implementation level: how can both the representation and the algorithm be realized physically?
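A standard non-vision illustration of how the levels come apart (my example, not from the slides): one computational-level specification, sorting, satisfied by two different algorithms, while the implementation level asks how either algorithm is physically realized.

```python
# Computational level: WHAT is computed. Algorithmic level: HOW, via a representation
# and a procedure. Two different algorithms can satisfy the same computational theory.

def is_sorted_permutation(xs, ys):
    """Computational level: ys is xs rearranged into ascending order."""
    return sorted(xs) == ys

def insertion_sort(xs):
    """Algorithmic level, option 1: build the output by inserting items one at a time."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    """Algorithmic level, option 2: split the list, sort the halves, and merge them."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

data = [3, 1, 2]
print(is_sorted_permutation(data, insertion_sort(data)))  # True
print(is_sorted_permutation(data, merge_sort(data)))      # True
# The implementation level would ask how either algorithm is physically realized
# (e.g., in silicon or in neurons), which the code above does not touch.
```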
functional level:
∙ The goal of the system is to derive a representation of the three-dimensional shape and spatial arrangement of an object in a form that allows it to be recognized. Thus, this representation should be object-centred (not viewpoint-dependent). It should contain information about all parts of the object (including hidden elements).
the hierarchy of levels
∙ the intentional stance: we treat the system as if it were a rational agent that tries to solve the task (or set of tasks); we are interested in the constraints imposed by the task and the general strategies for solving it.
∙ the design stance: one level lower; we consider the general principles and constraints of the system that might solve the task.
∙ the physical stance: a level lower; we consider how a system with a specific design might actually be physically constructed.
The physical stance
∙ The physical stance stems from the perspective of the physical sciences. To predict the behavior of a given entity according to the physical stance, we use information about its physical constitution in conjunction with information about the laws of physics.
the physical stance: example
∙ Holding a book in my hands, I predict that it will fall to the floor when I release it. My prediction relies on a) the fact that the book has mass and weight, and b) the law of gravity. Predictions and explanations based on the physical stance are exceedingly common.

Consider: explanations of why water freezes at 32°F, or of how mountain ranges are formed. These explanations proceed by way of the physical stance.

the design stance
When we make a prediction from the design stance, we assume that the entity in question has been designed in a certain way, and we predict that it will behave as designed. Like physical-stance predictions, design-stance predictions are commonplace.
the design stance: example
∙ When someone steps into an elevator and pushes “7,” they predict that the elevator will take them to the seventh floor. Again, they do not need to know any details about the inner workings of the elevator in order to make this prediction. There is no need, for example, for them to take it apart and weigh its parts.

Likewise: when, in the evening, a student sets her alarm clock for 8:30 AM, she predicts that it will behave as designed: i.e., that it will buzz at 8:30 the next morning.

The intentional stance
∙ We can improve our predictions yet further by adopting the intentional stance. When making predictions from this stance, we interpret the behaviour of the entity in question by treating it as a rational agent whose behaviour is governed by intentional states.

Reminder: intentional states are mental states, such as beliefs and desires, which have the property of “aboutness”; that is, they are about, or directed at, objects or states of affairs in the world.

interpretationalism
Whether a system has a certain belief or desire depends on our imposing a certain interpretation on the system. A statement ascribing a belief or desire is true when the best overall interpretation of the system’s behavior says that the organism has that belief or desire. From the intentional stance, we detect certain patterns that, although partially constituted by our own reactions to them, are objective.
Realism and instrumentalism
∙ Typically, a realist about the mental treats beliefs and desires as internal states of the system that cause the system’s behaviour.
∙ in contrast, an instrumentalist treats beliefs and desires as theoretical posits which we ascribe to various systems when doing so is instrumental to understanding the system’s behavior. These posits, however useful they might be to us, are nonetheless fictions, and thus our ascriptions of beliefs and desires are, strictly speaking, false according to the instrumentalist.