∙ thinking creatively and abstractly
∙ reasoning using logic
∙ planning, solving problems
∙ combining ideas in productive ways
∙ like many such concepts, intelligence may not have a single, brief definition
∙ but maybe we can recognize it when we see it?
“For ‘reason’ in this sense is nothing but ‘reckoning’, that is adding and subtracting, of the consequences of general names agreed upon for the ‘marking’ and ‘signifying’ of our thoughts” (Thomas Hobbes).
1) if it’s raining, then my soccer match will be canceled.
2) if my soccer match is canceled, then either I will work on my paper or read my book.
3) but I cannot work on my paper, since I don’t have my laptop.
4) it is raining; so I will read my book.
if P, then Q.
if Q, then either R or S.
Not R, since T.
P, therefore S.
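The form can be checked mechanically. The following sketch (my own illustration, not from the notes) brute-forces all truth assignments in Python to confirm that the premises entail S:

```python
# Minimal validity check for the argument form above: the argument is valid
# iff no truth assignment makes all premises true while S is false.
from itertools import product

def valid():
    for P, Q, R, S, T in product([False, True], repeat=5):
        premises = (
            (not P or Q)           # 1) if P, then Q
            and (not Q or R or S)  # 2) if Q, then either R or S
            and (T and not R)      # 3) not R, since T
            and P                  # 4) P
        )
        if premises and not S:
            return False           # counterexample: premises true, S false
    return True

print(valid())  # -> True: the premises entail S
```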
∙ could we design a machine that can carry out such operations?
∙ What are the bare minimum requirements for such a machine?
∙ what would an abstract description of such a machine look like?
∙ algorithm: a rule that can be followed by a machine
∙ this is not a prototype for a computer, but more like an ideal model of a rule-following device
∙ it’s a rule-following machine stripped down to its bare essentials (a minimal simulator is sketched below)
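As a hedged illustration of that idea (the machine, alphabet, and rule table below are invented for the example, not taken from the notes), here is a minimal Turing-style machine in Python: a tape, a head, a state, and a finite table of rules.

```python
# A rule-following machine stripped to its essentials: read a symbol, look up
# a rule, write a symbol, move the head, change state; repeat until "halt".
def run(tape, rules, state="start", head=0, blank="_"):
    cells = dict(enumerate(tape))
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Example rule table: invert a binary string (0 <-> 1), halt at the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run("10110", rules))  # -> "01001_"
```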
∙ one of the founders of computer science
∙ was involved in cracking Nazi codes during World War II
∙ also known for another very influential idea in AI
∙ could computers ever be intelligent?
∙ Turing: could they pass the “imitation game”?
∙ this is widely known as the Turing test
∙ the frame problem and the induction problem
∙ the initial formulation of the frame problem (narrow construal)
∙ the semantic versus the syntactic problems
∙ the epistemological frame problem
∙ the computational problem versus Hamlet’s (real) problem
∙ Possible defense strategies
∙ the number of factors to consider in some situations is enormous
∙ how can the person/system decide, which information is relevant in a given context?
∙ mayonnaise doesn’t melt knives
∙ opening the fridge doesn’t cause an explosion
∙ with the mayonnaise jar in my left hand, I cannot also be spreading the mayonnaise with my left hand
∙ space management: our brains are not all that large, so not all the knowledge can be stored explicitly
∙ time management: stored information has to be reliably accessible for use within short, real-time spans if the system is to count as intelligent
∙ logic is used to determine the effects of actions
∙ frame axioms: descriptions of what stays unchanged (the “frame”) in the sandwich situation, as sketched below
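A sketch of what this looks like in practice (the facts and action below are invented stand-ins for the sandwich situation, not the notes’ formalism): each action gets effect axioms for what it changes, and the logical approach then needs an explicit frame axiom for every fact it does not change.

```python
# Illustrative frame-axiom bookkeeping for the sandwich situation.
state = {"fridge_open": False, "mayo_on_bread": False, "holding_knife": True}

def open_fridge(s):
    s = dict(s)
    s["fridge_open"] = True  # effect axiom: the action changes this fact
    # frame axioms: in a purely logical formulation, each UNchanged fact needs
    # its own explicit axiom, e.g.:
    #   open_fridge does not change mayo_on_bread
    #   open_fridge does not change holding_knife
    # ... one per fact, per action: the explosion the notes describe.
    return s

print(open_fridge(state))
```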
∙ introspectively it doesn’t seem that humans think like this
∙ so either we do it unconsciously, or we don’t
the property problem:
∙ virtually every aspect of the situation can change under some circumstances
∙ that requires an axiom for every such aspect, since there is a case in which it might change
∙ strategy: stating that only explicit information counts creates a further problem: you can’t just change one aspect; a change typically comes with other changes (Dennett’s plate example).
∙ idea: maybe Dennett’s midnight-snack routines, developed over the years, guide his actions. A mechanism of some complexity contains subroutines for mayonnaise spreading, sandwich making, and getting something out of the fridge.
∙ these quasi-automatic actions include subgoal checks, so he does not need to consider all hypothetical options (see the sketch below).
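A toy sketch of this idea (the steps and checks are invented examples): a stored routine runs quasi-automatically, pausing only for cheap subgoal checks, and deliberation kicks in only when a check fails.

```python
# A "midnight snack" routine: execute stored steps, verify each subgoal,
# and replan only on failure instead of weighing every hypothetical option.
def make_snack(world):
    routine = [
        (lambda w: w.update(fridge_open=True), lambda w: w["fridge_open"]),
        (lambda w: w.update(have_mayo=True),   lambda w: w["have_mayo"]),
        (lambda w: w.update(mayo_spread=True), lambda w: w["mayo_spread"]),
    ]
    for act, achieved in routine:
        act(world)               # run the stored subroutine step
        if not achieved(world):  # cheap subgoal check
            return "replan"      # deliberate only when a check fails
    return "snack ready"

print(make_snack({}))  # -> "snack ready"
```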
∙ maybe the problem of induction (Hume) is the frame problem? after all, we want systems to have the right expectations and draw the right inferences
Semantic problem or Newell’s problem of the knowledge level:
∙ what information must be installed?
Syntactic Problem: What kind of system, what kind of representational format, which structures, processes, or mechanisms do we use to store this information?
∙ it’s a general epistemological problem: how do humans (or any intelligent system) know/decide which options to neglect as irrelevant, without first computing/creating them as options?
∙ Dennett: “how can a cognitive creature with many beliefs about the world update those beliefs when it performs an act so they remain roughly faithful to the world?”
∙ Hamlet’s Problem: when to stop thinking
∙ even if we solve the computational worry (by using decision heuristics, for example), the real philosophical issue is still unsolved
∙ how can the robot ever be sure it had sufficiently thought through the consequences of its actions and didn’t miss anything important?
∙ claim (Fodor): the mind’s central processes can draw on information from any source; they are “informationally unencapsulated.”
∙ in humans, all of life’s experiences, for all their variety, can be understood as variations on a manageable number of stereotypic themes and paradigmatic scenarios: “frames,” as Minsky calls them; Schank refers to them as “scripts.”
Answer (Dennett): the scripts/frames approach “attempts to solve the frame problem” by installing frames/scripts for the problems a particular system is likely to encounter (see the sketch below).
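A minimal data-structure sketch of the frames/scripts idea (the restaurant slots are my invented example, in the spirit of Schank’s restaurant script): a stereotyped situation carries default values, and only observed deviations need attention.

```python
# A script as a stereotype with overridable defaults.
restaurant_script = {
    "roles": ["customer", "waiter", "cook"],
    "scenes": ["enter", "order", "eat", "pay", "leave"],
    "defaults": {"payment": "after eating", "utensils": "on the table"},
}

def instantiate(script, observed):
    frame = dict(script["defaults"])  # assume the stereotype...
    frame.update(observed)            # ...override only what is observed
    return frame

print(instantiate(restaurant_script, {"payment": "at the counter"}))
```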
∙ Clearly this is a vaguely defined concept, useful only as a rhetorical abbreviation, as a gesture in the direction of real difficulties to be spelled out carefully. “Beware of postulating cognitive wheels” masquerades as good advice to the cognitive scientist, while courting vacuity. It occupies the same rhetorical position as the stockbroker’s maxim: buy low and sell high. Still, the term is a good theme-fixer for discussion.
1) connectionism: “bottom-up” strategies
2) dynamic, situated approaches
∙ Rodney Brooks: attempts to build a simple insect-level intelligence without rule-based procedures
∙ Andy Clark: action-oriented representations
∙ Marr’s levels of explanation
∙ Marr’s model of research strategy
Example: visual perception
∙ Dennett’s hierarchy of levels in the intentional stance
∙ CogSci as reverse engineering: the hierarchy of levels
∙ Dennett: the problem with reverse engineering
∙ AI and AL
∙ or we start with general theories about thought and about how cognition works, and then work downwards to investigate how the corresponding mechanisms might be instantiated in the brain
∙ in both cases, we have to consider different levels of explanation, which often correspond to different disciplines
∙ it can’t be completely data-driven: which languages you can understand/speak, and top-down effects depending on interest
∙ the suitable trade-off between top-down and bottom-up influences is a central parameter: the system should filter out the noise without over-interpreting (a toy illustration follows)
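As a toy illustration (the weighting scheme and numbers are invented, not a model from the notes), a single parameter can mix a top-down expectation with bottom-up evidence; too much weight on the expectation over-interprets, too little passes noise through.

```python
# One knob w trades off top-down expectation against bottom-up signal.
def percept(expectation, signal, w=0.5):
    return round(w * expectation + (1 - w) * signal, 2)

prior = 0.9         # top-down expectation from context/interest
noisy_signal = 0.2  # bottom-up evidence, corrupted by noise
print(percept(prior, noisy_signal, w=0.3))  # -> 0.41, evidence-dominated
print(percept(prior, noisy_signal, w=0.9))  # -> 0.83, expectation-dominated
```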
∙ the computational level: what is the goal of the computation, and why is it appropriate?
∙ the algorithmic level: how can the computational theory be implemented? What is the representation for the input and output? What is the algorithm for the transformation?
∙ the implementation level: how can both the representation and the algorithm be realized physically? (the mapping is illustrated below)
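To make the three levels concrete, here is a hedged mapping onto a toy vision task (edge detection; the task choice and code are my illustration, not from the notes).

```python
# Computational level: WHAT and WHY -- find sharp intensity changes, because
#   they mark object boundaries.
# Algorithmic level: representation and algorithm -- a list of intensities,
#   thresholded differences of neighbors.
# Implementation level: physical realization -- here Python on a CPU; in
#   humans, retinal and cortical circuitry.
def edges(intensities, threshold=5):
    return [i for i in range(1, len(intensities))
            if abs(intensities[i] - intensities[i - 1]) > threshold]

print(edges([10, 10, 11, 40, 41, 40, 9, 9]))  # -> [3, 6]: two boundaries
```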
∙ the design stance: one level lower; we consider the general principles and constraints of the system that might solve the task.
∙ the physical stance: a level lower; we consider how a system with a specific design might actually be physically constructed.
Consider: explanations of why water freezes at 32°F, or how mountain ranges are formed. These explanations proceed by way of the physical stance.
Likewise: when in the evening a student sets her alarm clock for 8:30 AM, she expects that it will behave as designed: i.e., that it will buzz at 8:30 the next morning.
Reminder: intentional states are mental states, such as beliefs and desires, which have the property of “aboutness,” that is, they are about, or directed at, objects or states of affairs in the world.
∙ in contrast, an instrumentalist treats beliefs and desires as theoretical posits which we ascribe to various systems when doing so is instrumental to understanding that system’s behavior. These posits, however useful they might be to us, are nonetheless fictions, and thus our ascriptions of beliefs and desires are, strictly speaking, false according to the instrumentalist.