He had a strong desire to make the model ‘intelligent’, so that it could act and think as he did. This, however, was a much more complex task than what he had done before. So, he took many years to construct an ‘analytical engine’ that could perform a little arithmetic mechanically. Babbage’s analytical engine was the first significant success in the modern era of computing. Computers of the first generation, which were realized following this revolutionary success, were made of thermionic valves. They could perform the so-called ‘number crunching’ operations.
The second-generation computers came up shortly after the invention of transistors and were much more compact. They were mainly used for commercial data processing and payroll creation. After a decade or so, when the semiconductor industry started producing integrated circuits (ICs) in bulk, the third-generation computers were launched in business houses. These machines had an immense capability to perform massive computations in real time. Many electromechanical robots were also designed with these computers. Then, after another decade, the fourth-generation computers came up with high-speed VLSI engines.
Many electronic robots that can see through cameras to locate objects for placement at desired locations were realized during this period. During the period 1981-1990, the Japanese Government started to produce the fifth-generation computing machines that, besides having all the capabilities of the fourth-generation machines, could also process intelligence. The computers of the current (fifth) generation can process natural languages, play games, recognize images of objects and prove mathematical theorems, all of which lie in the domain of Artificial Intelligence (AI).
II: Introduction Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as “the study and design of intelligent agents”, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as “the science and engineering of making intelligent machines, especially intelligent computer programs.” The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.
AI research is highly technical and specialized, deeply divided into subfields that often fail in the task of communicating with each other. Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, longstanding differences of opinion about how AI should be done, and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. General intelligence (or “strong AI”) is still among the field’s long-term goals.
III: History The history of artificial intelligence began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with “an ancient wish to forge the gods.” The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning.
This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain. The field of artificial intelligence research was founded at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation, and they were given millions of dollars to make this vision come true.
Eventually it became obvious that they had grossly underestimated the difficulty of the project. In 1973, in response to the criticism of Sir James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese Government inspired governments and industry to provide AI with billions of dollars, but by the late ’80s the investors became disillusioned and withdrew funding again. This cycle of boom and bust, of “AI winters” and summers, continues to haunt the field.
Undaunted, there are those who make extraordinary predictions even now. Progress in AI has continued, despite the rise and fall of its reputation in the eyes of government bureaucrats and venture capitalists. Problems that had begun to seem impossible have been solved, and the solutions are now used in successful commercial products. However, no machine has been built with a human level of intelligence, contrary to the optimistic predictions of the first generation of AI researchers. “We can only see a short distance ahead,” admitted Alan Turing, in the famous 1950 paper that catalyzed the modern search for machines that think. “But,” he added, “we can see much that must be done.” IV: Concepts What is Artificial Intelligence? “The science and engineering of making intelligent machines, especially intelligent computer programs.” Artificial Intelligence is made up of two words: Artificial: It means something that is not natural but is made by human skill or produced by humans. It implies creating a copy or imitation of a human. Though we can make a machine artificially similar to a human, it lacks spontaneity and naturalness.
Intelligence: It implies injecting intelligence into a machine so that it can perform work which would otherwise require a human brain. The device should be able to make its own decisions according to a particular situation. For example, the game of chess on a computer is artificially intelligent: the computer plays its moves according to the moves of the opponent rather than following any fixed sequence of moves. In order to make a machine artificially intelligent, it should possess the following capabilities: learning, reasoning, problem solving, perception, etc.
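The chess example above, where the program chooses its move in response to the opponent rather than from a fixed script, is usually implemented with minimax search. Here is a minimal sketch on a much simpler game, Nim (take 1-3 sticks; whoever takes the last stick wins), standing in for chess; the game and function names are illustrative, not from any particular system.

```python
# Minimax search on Nim: the program looks ahead through the
# opponent's possible replies and picks the move that guarantees
# the best outcome, rather than playing from a fixed move list.

def best_move(sticks, maximizing=True):
    """Return (score, move) for the player to act; score is +1 if the
    maximizing player can force a win, -1 otherwise."""
    if sticks == 0:
        # The previous player took the last stick and won.
        return (-1, None) if maximizing else (1, None)
    best = None
    for take in (1, 2, 3):
        if take > sticks:
            break
        score, _ = best_move(sticks - take, not maximizing)
        if best is None or (maximizing and score > best[0]) or \
           (not maximizing and score < best[0]):
            best = (score, take)
    return best

score, move = best_move(10)
print(score, move)  # 1 2 -- with 10 sticks, taking 2 forces a win
```

The same scheme scales (with pruning and heuristic evaluation) to chess, where the tree is far too large to search exhaustively.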
V: Branches of AI Here’s a list of some of the branches of AI: Logical AI: What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals. The first article proposing this was [McC59]. [McC89] is a more recent summary. [McC96] lists some of the concepts involved in logical AI. [Sha97] is an important text. Search: AI programs often examine large numbers of possibilities, e.g., moves in a chess game or inferences by a theorem-proving program. Discoveries are continually made about how to do this more efficiently in various domains. Pattern recognition: When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g., in a natural language text, in a chess position, or in the history of some event, are also studied. These more complex patterns require quite different methods. Representation:
Facts about the world have to be represented in some way. Usually languages of mathematical logic are used. Inference: From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is to be inferred by default, but the conclusion can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin.
It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises. Circumscription is another form of non-monotonic reasoning. Common sense knowledge and reasoning: This is the area in which AI is farthest from the human level, in spite of the fact that it has been an active research area since the 1950s. While there has been considerable progress, e.g., in developing systems of non-monotonic reasoning and theories of action, yet more new ideas are needed. The CYC system contains a large but spotty collection of common sense facts. Learning from experience: Programs do that. The approaches to AI based on connectionism and neural nets specialize in that. There is also learning of laws expressed in logic. [Mit97] is a comprehensive undergraduate text on machine learning. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.
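The bird/penguin default from the inference discussion can be illustrated in a few lines. This is a toy sketch, not any real non-monotonic reasoning system: the rule format and names are invented for illustration.

```python
# Default (non-monotonic) reasoning in miniature: "Tweety can fly"
# is inferred by default from "Tweety is a bird", and withdrawn
# when the contrary evidence "Tweety is a penguin" is added.

def can_fly(facts):
    """Default rule: birds fly, unless the exception 'penguin' holds."""
    if "bird" not in facts:
        return None                  # the rule does not apply at all
    return "penguin" not in facts    # default True, withdrawn on exception

facts = {"bird"}
print(can_fly(facts))        # True  -- inferred by default

facts.add("penguin")         # new evidence arrives...
print(can_fly(facts))        # False -- the conclusion is withdrawn
```

Note how this differs from monotonic logic: adding a premise (“penguin”) removed a conclusion, which ordinary deduction can never do.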
Planning: Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions. Epistemology: This is a study of the kinds of knowledge that are required for solving problems in the world. Ontology: Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are.
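The planning loop just described, facts plus a goal in, a sequence of actions out, can be sketched with a breadth-first search over world states. The world here (a robot moving between rooms) and the action names are invented for illustration; real planners use much richer state and action languages.

```python
# A minimal planner: from an initial state, a table of known actions,
# and a goal state, search for a sequence of actions reaching the goal.
from collections import deque

ACTIONS = {           # action name -> (precondition state, result state)
    "go-hall":   ("room-a", "hall"),
    "go-room-a": ("hall", "room-a"),
    "go-room-b": ("hall", "room-b"),
}

def plan(start, goal):
    """Breadth-first search returning a shortest action sequence."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for name, (pre, post) in ACTIONS.items():
            if state == pre and post not in seen:
                seen.add(post)
                frontier.append((post, steps + [name]))
    return None  # no plan exists

print(plan("room-a", "room-b"))  # ['go-hall', 'go-room-b']
```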
Emphasis on ontology begins in the 1990s. Heuristics: A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates that compare two nodes in a search tree to see if one is better than the other, i.e., constitutes an advance toward the goal, may be more useful. [My opinion]. Genetic programming: Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest over millions of generations.
It is being developed by John Koza’s group, and a tutorial is available. VI: Artificial Intelligence in fiction AI in Service to Society: In these stories humanity (or organic life) remains in authority over robots. Often the robots are programmed specifically to maintain this relationship, as in the Three Laws of Robotics. Robby from Forbidden Planet is incapable of harming intelligent life even when ordered to do so. Cortana from the Halo series, although she is capable of single-handedly controlling the Pillar of Autumn, is only a subordinate on the ship, taking orders from Captain Keyes. (When he gave her orders, she responded with “Aye aye, sir,” before disappearing.) This would imply that shipboard AIs are only responsible to the captain. AIs only occupy posts as instructors or advisors, never as superiors. In the Alien movies, not only is the Nostromo, the spaceship aboard which the first film takes place, somewhat intelligent (the crew call it “Mother”), but there are also androids in the society, called “synthetics” or “artificial persons,” that are such perfect imitations of humans that they are not discriminated against.
In Tiberian Sun, the Brotherhood of Nod designed a self-aware AI named CABAL (Computer Assisted Biologically Augmented Lifeform, meaning that the AI’s processing capabilities have been improved by using the brains of several dozen humans in stasis) to coordinate their forces until their defeat in the Second Tiberium War. After the war, CABAL was disassembled by GDI, but the core was stolen back by Nod to resume their operations. It was ultimately recaptured by GDI to help translate the Tacitus (of the two other entities who were able to do it, Kane was missing and Tratos was assassinated by CABAL shortly before).
However, as soon as the Tacitus was translated, CABAL turned on both factions. CABAL was finally put down by an unholy alliance between GDI and Nod forces, and its core was later used by Kane to create LEGION. GDI also possessed AI systems, nicknamed EVAs. At first they served as comm links between commanders and field troops, but later improvements enabled EVAs to think blindingly fast, assist in the tracing of calls, calculate the best options for attacking bases, coordinate the ion cannon network and all battlefield communications, as well as serve as a videoconferencing conduit.
One of the greatest achievements of EVA’s builders and designers was to keep the EVA network functioning during an ion storm. In sharp contrast to CABAL and LEGION, all EVA units are non-sentient, though at some point between 1995 and 2030, GDI was able to crack the Turing test. The Merger of AI with Humanity: In these stories humanity has become the AI. In works such as the Japanese manga Ghost in the Shell, the existence of intelligent machines brings into question the requirement that life be organic, rather than a broader category of autonomous entities, establishing a notional concept of systemic intelligence.
The series also explores the merging of man and machine; most humans have physically and mentally enhancing cybernetic implants. The mind interface allows one to dive (as opposed to surf) the web by thought alone. The Borg from Star Trek: The Next Generation represent a transmuting scenario. They are a race of cyborgs without individuality, who take part in a Collective. In the Commonwealth novels there is a sentient machine race, the SI (Sentient Intelligence), that asked to leave the service of mankind. It lives peacefully in isolation on its own planet and allows humans to download their minds into it upon their death.
In the novels set 1,000 years after the Commonwealth there is a computer system called ANA (Advanced Neural Activity) to which minds are transferred after a person grows tired of life, and where they can live out the rest of their existence in a virtual reality. In the movie D.A.R.Y.L., scientists replace a young boy’s brain with a computer. The merging of man and machine is also explored, with many humans augmenting their minds with cybernetic implants. Some are even described by a portmanteau of “human” and “AI”, where extreme augmentation has led to the blurring of the line between a natural and an artificial mind. VII: Problems “Can a machine act intelligently?” is still an open problem. Taking “A machine can act intelligently” as a working hypothesis, many researchers have attempted to build such a machine. The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention. 7.1 Deduction, reasoning, problem solving Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions.
By the late 1980s and ’90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics. For difficult problems, most of these algorithms can require enormous computational resources; most experience a “combinatorial explosion”: the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research.
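The “combinatorial explosion” is easy to make concrete with a little arithmetic: a search tree with a fixed branching factor b has on the order of b^depth nodes, so resources blow up past a modest size. The branching factor 30 below is a rough, illustrative figure for a chess-like game, not a measured value.

```python
# Counting nodes in a complete search tree to show why memory and
# time become astronomical as the problem size grows.

def tree_nodes(branching, depth):
    """Total nodes in a complete tree: 1 + b + b^2 + ... + b^depth."""
    return sum(branching ** d for d in range(depth + 1))

for depth in (2, 4, 8):
    print(depth, tree_nodes(30, depth))
# A lookahead of only 8 moves at branching factor 30 already
# exceeds 600 billion positions.
```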
Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of “sub-symbolic” problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside human and animal brains that give rise to this skill. 7.2 Knowledge representation An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.
Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A representation of “what exists” is an ontology (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.
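A tiny sketch of the idea of an ontology as concepts plus relations: categories linked by an is-a relation, with property lookup that climbs the hierarchy. The contents (birds, canaries, their properties) are invented for illustration and are far simpler than any real upper ontology.

```python
# A toy ontology: an is-a hierarchy of categories, and a lookup that
# inherits properties from a concept's ancestors.

IS_A = {"canary": "bird", "penguin": "bird", "bird": "animal"}
PROPERTIES = {
    "animal": {"alive"},
    "bird": {"has-wings"},
    "canary": {"sings"},
}

def props(concept):
    """Collect properties from the concept and all its ancestors."""
    out = set()
    while concept is not None:
        out |= PROPERTIES.get(concept, set())
        concept = IS_A.get(concept)   # climb one level up the hierarchy
    return out

print(sorted(props("canary")))  # ['alive', 'has-wings', 'sings']
```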
Among the most difficult problems in knowledge representation are: Default reasoning and the qualification problem: Many of the things people know take the form of “working assumptions.” For example, if a bird comes up in conversation, people typically picture an animal that is fist-sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969 as the qualification problem: for any common sense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires.
AI research has explored a number of solutions to this problem. The breadth of commonsense knowledge: The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering; they must be built, by hand, one complicated concept at a time. A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the internet, and thus be able to add to its own ontology.
The sub-symbolic form of some commonsense knowledge: Much of what people know is not represented as “facts” or “statements” that they could express verbally. For example, a chess master will avoid a particular chess position because it “feels too exposed”, or an art critic can take one look at a statue and instantly realize that it is a fake. These are intuitions or tendencies that are represented in the brain non-consciously and sub-symbolically. Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge.
As with the related problem of sub-symbolic reasoning, it is hoped that situated AI or computational intelligence will provide ways to represent this kind of knowledge. 7.3 Planning A hierarchical control system is a form of control system in which a set of devices and governing software is arranged in a hierarchy. Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices.
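Maximizing the utility of the available choices can be sketched directly: each action leads to outcomes with probabilities and utilities, and the agent picks the action with the highest expected utility. All the numbers and action names below are invented for illustration.

```python
# Expected-utility choice: weight each outcome's utility by its
# probability and pick the action with the highest total.

ACTIONS = {
    "safe":  [(1.0, 5.0)],                 # (probability, utility)
    "risky": [(0.3, 20.0), (0.7, 0.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(ACTIONS, key=lambda a: expected_utility(ACTIONS[a]))
print(best)  # risky -- 0.3 * 20 = 6.0 beats the sure 5.0
```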
In classical planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions will be. However, if this is not true, it must periodically check whether the world matches its predictions, and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty. Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence. 7.4 Learning Machine learning has been central to AI research from the beginning. In 1956, at the original Dartmouth summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: “An Inductive Inference Machine”. Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories.
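Classification as just described, assigning a category after seeing labelled examples, can be sketched with a nearest-centroid rule. The data and labels below are invented for illustration; real classifiers handle many dimensions and far noisier data.

```python
# Classification from examples, in miniature: learn one centroid
# (average) per category from labelled 1-D examples, then assign a
# new item to the category whose centroid is nearest.

EXAMPLES = {"small": [1.0, 2.0, 1.5], "large": [8.0, 9.0, 10.0]}

CENTROIDS = {label: sum(xs) / len(xs) for label, xs in EXAMPLES.items()}

def classify(x):
    return min(CENTROIDS, key=lambda label: abs(x - CENTROIDS[label]))

print(classify(1.2), classify(8.5))  # small large
```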
Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. 7.5 Motion and manipulation The field of robotics is closely related to AI.
Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with sub-problems of localization (knowing where you are), mapping (learning what is around you) and motion planning (figuring out how to get there). 7.6 Perception Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and others that are more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected sub-problems are speech recognition, facial recognition and object recognition. 7.7 Social intelligence
Kismet is a robot with rudimentary social skills. Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory and decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, for good human-computer interaction, an intelligent machine needs to display emotions. At the very least it must appear polite and sensitive to the humans it interacts with. At best, it should have normal emotions itself.