
Artificial intelligence for games - Essay Example

The new economy of information technology has shaped the way we live. The game industry is one of the major components of technology associated with human beings. However, there is little understanding of the intelligent agents responsible for the development of games. The purpose of this study is to investigate artificial intelligence in games and the intelligent agents used to design smart games. The history of games, the challenges of technological change, and the trends in gaming are discussed in the literature review. The study also provides additional information on the system overview, design, game theory, game genres, trends, current issues in the gaming industry, and the implementation of AI in gaming.

Introduction: The field of game AI has existed since the creation of the earliest video games in the 1950s. The field is evolving rapidly, setting new standards for computer hardware engineering and establishing benchmarks for future game industry production. In the past few years, sophisticated game AIs that are more entertaining than the simple video game AIs of the past have emerged and dominated the market.

As 3D rendering hardware improves and as high-resolution game graphics have become the de facto industry standard, game AI has increasingly become one of the critical factors determining a game's success. Programmers dedicated solely to game AI are now an integral part of the core design group. Programming game AI is one of the most challenging enhancements a game developer can engineer; at the annual Game Developers Conference, an increasing number of presentations are concerned with AI techniques.

The real-time performance requirements of computer game AI and the demand for humanlike interactions, appropriate animation sequences, and internal state simulations for populations of scripted agents have impressively demonstrated the potential of academic AI research and game AI technologies. The commercial success of licensed game engines like Unreal Tournament's has inspired an entire genre of first-person shooter designs that incorporate increasingly sophisticated and expert agent behaviors.

The bots of Epic Games' Unreal Tournament are well known for their scalability and tactical excellence. Hardware performance capabilities and constraints have been a persistent bottleneck to the creation of advanced game AI. Real-time graphics rendering has traditionally been a huge CPU hog, leaving little time and memory for game AI (and collision detection). Some fundamental AI problems, such as pathfinding, cannot be solved without adequate processor resources. The potential for computer games as a tool for AI research and education continues to blossom.

Well-designed games need great intelligence and good strategy. Designing an agent that plays a game is a challenging task, so it provides an ideal context for practicing AI algorithms. Many game-based tools for CS and AI educators have been presented so far. McGovern et al. and Tender et al. have proposed several Java-based games, such as Intelligence. However, these tools do not provide a single platform for managing the various games used in CS education. Users can only develop and test their game agents offline, and it is difficult for users to compare their agents with others'.

Furthermore, those tools provide no uniform set of interfaces for educators to design different games to teach different algorithms. The Betony Intelligent Agent Platform is designed as an online turn-based strategy game playing system. Students majoring in computer science can create game agents and compete with others on the platform. In the process, they can learn basic programming skills and many artificial intelligence algorithms. Currently, such platforms for educators and researchers to use for turn-based strategy game development are limited.

MUGS is a similar platform, but it supports only normal-form games, which are a small part of the turn-based strategy genre. A turn-based strategy game is a strategy game where players take turns when playing, as distinguished from real-time strategy, where all players play simultaneously. This type of game is commonly seen (chess and bridge, for example), so it is easy for students to understand. Furthermore, games of differing complexity can be applied in various courses. For example, the simplest Gobang game can be used in introductory classes, while the RoboCup game can be used in advanced artificial intelligence classes.

The platform is thus suitable for students from undergraduates to graduates. We also provide a set of facilities and clean interfaces, allowing users to design different competitive situations and evaluate differing types of intelligent agents. We have applied the Betony in an introductory programming course and an advanced algorithm course, and we developed two turn-based strategy games for these two courses, respectively. Through the Betony, we try to lead students to experience the entertainment of programming.

As a result, the students are more interested in this study method than in traditional ones. In addition, we have applied the Betony in a large-scale live competition, in which competitors develop agents for an originally designed game. With the help of the Betony, it is convenient for the competition organizer to select the competitors who have a deeper understanding of applying AI algorithms. We first review the structure and discuss the features of the Betony. Then we present the main games on the Betony and their applications in courses and competitions.

Finally, we introduce the interfaces for developing new games, and invite users to develop new games or request educational cooperation. In video games, artificial intelligence is used to produce the illusion of intelligence in the behavior of non-player characters (NPCs). The techniques used typically draw upon existing methods from the field of artificial intelligence (AI). However, the term game AI is often used to refer to a broad set of algorithms that also includes techniques from control theory, robotics, computer graphics, and computer science in general.

Since game AI is centered on the appearance of intelligence and good gameplay, its approach is very different from that of traditional AI; workarounds and cheats are acceptable and, in many cases, the computer's abilities must be toned down to give human players a sense of fairness. This, for example, is true in first-person shooter games, where NPCs' otherwise perfect aiming would be beyond human skill. History: Game playing was an area of research in AI from its inception. One of the first examples was the computerized game of Nim, made in 1951.

Despite being advanced technology in the year it was made, two decades before Pong, the game took the form of a relatively small box and was able to regularly win games even against highly skilled players. In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess. These were among the first computer programs ever written. Arthur Samuel's checkers program, developed in the mid-1950s and early 1960s, eventually achieved sufficient skill to challenge a respectable amateur.

Work on checkers and chess would culminate in the defeat of Garry Kasparov by IBM's Deep Blue computer in 1997. The first video games, developed in the 1960s and early 1970s, like Spacewar!, Pong, and Gotcha (1973), were implemented on discrete logic and strictly based on the competition of two players, without AI. Games featuring a single-player mode with enemies started appearing in the 1970s. The first notable ones for the arcade appeared in 1974: the Taito game Speed Race (a racing video game) and the Atari games Qwak (a duck-hunting light gun shooter) and Pursuit (a fighter aircraft dogfighting simulator).

Two text-based computer games from 1972, Hunt the Wumpus and Star Trek, also had enemies. Enemy movement was based on stored patterns. The incorporation of microprocessors would allow more computation and random elements overlaid onto movement patterns. It was during the golden age of video arcade games that the idea of AI opponents was largely popularized, due to the success of Space Invaders (1978), which sported an increasing difficulty level, distinct movement patterns, and in-game events dependent on hash functions based on the player's input.

Galaxian (1979) added more complex and varied enemy movements, including maneuvers by individual enemies that break out of formation. Pac-Man (1980) introduced AI patterns to maze games, with the added quirk of different personalities for each enemy. Karate Champ (1984) later introduced AI patterns to fighting games, although the poor AI prompted the release of a second version.

First Queen (1988) was a tactical action RPG featuring characters that can be controlled by the computer's AI in following the leader. The role-playing video game Dragon Quest IV (1990) introduced a "Tactics" system, in which the user can adjust the AI routines of non-player characters during battle, a concept later introduced to the action role-playing game genre by Secret of Mana (1993). Games like Madden Football, Earl Weaver Baseball, and Tony La Russa Baseball all based their AI on an attempt to duplicate on the computer the coaching or managerial style of the selected celebrity.

Madden, Weaver, and La Russa all did extensive work with their game development teams to maximize the accuracy of the games. Later sports titles allowed users to "tune" variables in the AI to produce a player-defined managerial or coaching strategy. The emergence of new game genres in the 1990s prompted the use of formal AI tools like finite-state machines. Real-time strategy games taxed the AI with many objects, incomplete information, pathfinding problems, real-time decisions, and economic planning, among other things. The first games of the genre had notorious problems.

Herzog Zwei (1989), for example, had almost broken pathfinding and very basic three-state state machines for unit control, while Dune II (1992) attacked the player's base in a beeline and used numerous cheats. The 1990s also saw the rise of bottom-up AI methods, such as the emergent behavior and evaluation of player actions in games like Creatures and Black & White. Façade (an interactive story) was released in 2005 and used interactive multiple-way dialogs and AI as the main aspect of the game. Games have provided an environment for developing artificial intelligence with potential applications beyond game play.

Examples include Watson, a Jeopardy!-playing computer, and the RoboCup tournament, where robots are trained to compete in soccer. Purists complain that the "AI" in the term "game AI" overstates its worth, as game AI is not about intelligence and shares few of the objectives of the academic field of AI. Whereas "real" AI addresses fields of machine learning, decision making based on arbitrary data input, and even the ultimate goal of strong AI that can reason, "game AI" often consists of a half-dozen rules of thumb, or heuristics, that are just enough to give a good gameplay experience.

Game developers' increasing awareness of academic AI and the academic community's growing interest in computer games are causing the definition of what counts as AI in a game to become less idiosyncratic. Nevertheless, significant differences between application domains mean that game AI can still be viewed as a distinct subfield of AI. In particular, the ability to legitimately solve some AI problems in games by cheating creates an important distinction.

For example, inferring the position of an unseen object from past observations can be a difficult problem when AI is applied to robotics, but in a computer game an NPC can simply look up the position in the game's scene graph. Such cheating can lead to unrealistic behavior and so is not always desirable. But its possibility serves to distinguish game AI and leads to new problems to solve, such as when and how to use cheating.
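As a hedged illustration of the contrast described above (all function names, data, and numbers here are invented for the sketch, not drawn from any particular engine), inference from past observations versus a direct "cheating" lookup might look like this:

```python
# Hypothetical sketch: an NPC can either estimate a hidden player's
# position from past sightings, or "cheat" by reading the true position
# straight from the game's internal world state.

def estimate_position(sightings):
    """Infer a position by extrapolating the last two observations."""
    (x1, y1), (x2, y2) = sightings[-2], sightings[-1]
    return (2 * x2 - x1, 2 * y2 - y1)  # assumes constant velocity

def cheat_position(world_state):
    """Look the answer up directly: trivially accurate, possibly unfair."""
    return world_state["player_position"]

world = {"player_position": (12, 7)}
sightings = [(4, 3), (6, 4)]  # last known observations

print(estimate_position(sightings))  # extrapolated guess: (8, 5)
print(cheat_position(world))         # exact answer: (12, 7)
```

The "when and how to use cheating" question then becomes a design decision about which of the two calls the NPC is allowed to make.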

Modern Computer Games: The use of game applications has a long tradition in artificial intelligence. Games provide high variability and scalability for problem definitions, are processed in a restricted domain, and produce results that are generally easy to evaluate. But there is also a great deal of interest on the commercial side, the "AI inside" feature being a high-priority task in the fast-growing, multi-billion-dollar electronic gaming market (the revenue from PC game software alone is already as big as that of box office movies).

Many "traditional" games, such as card, board, and puzzle games like Go-Moku and Nine Men's Morris, have recently been solved by AI techniques. Deep Blue's victory over Kasparov was another milestone event here. However, it is highly questionable whether, and to what extent, the techniques used in this field of research can be applied to today's "modern" computer games. Of the techniques used in this field, A* search and its variants/extensions are practically the only ones employed in modern computer games. Such games pose problems for AI that are far more complex than those of traditional games.

AI techniques can be applied to a variety of tasks in modern computer games, for example, a game that uses probabilistic networks to predict the player's actions. Although AI need not always be personified, the notion of artificial intelligence in computer games is primarily related to characters. These characters can be seen as agents, their properties perfectly fitting the AI agent concept. But how does the player of a computer game perceive the intelligence of a game agent/character? Important dimensions include physical characteristics, language cues, behaviors, and social skills.

Physical characteristics like attractiveness are more a matter for psychologists and visual artists. Language skills are not normally needed by game agents and are ignored here too. The most important question when judging an agent's intelligence is the goal-directed component, which we look at in the rest of this paper. The standard procedure followed in modern computer games to implement goal-directed behavior is to use predetermined behavior patterns. This is normally done using simple if-then rules.

In more sophisticated approaches using neural networks, behavior becomes adaptive, but the purely reactive property has still not been overcome. Many computer games circumvent the problem of applying sophisticated AI techniques by allowing computer-guided agents to cheat. But the credibility of an environment featuring cheating agents is very hard to ensure, given the constant growth of the complexity and variability of computer-game environments. Consider a situation in which a player destroys a communication vehicle in an enemy convoy in order to stop the enemy communicating with its headquarters.

If the game cheats to avoid a realistic simulation of the characters' behavior, directly accessing the game's internal map information, the enemy's headquarters may nonetheless be aware of the player's subsequent attack on the convoy. Classes of intelligent agents: In artificial intelligence, an intelligent agent (IA) is an autonomous entity that observes through sensors and acts upon an environment using actuators (i.e., it is an agent) and directs its activity towards achieving goals (i.e., it is rational). Intelligent agents may also learn or use knowledge to achieve their goals.

They may be very simple or very complex: a reflex machine such as a thermostat is an intelligent agent, as is a human being, as is a community of human beings working together towards a goal. Intelligent agents can be classified into five classes based on their degree of perceived intelligence and capability:

1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agents

Simple reflex agents: Simple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history. The agent function is based on the condition-action rule: if condition then action.
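The thermostat mentioned above makes a minimal sketch of a simple reflex agent: the current percept maps straight to an action through condition-action rules, with no percept history. The thresholds and action names below are illustrative, not from any real device.

```python
# Simple reflex agent sketch: a thermostat mapping the current percept
# (a temperature reading) to an action via condition-action rules.
# It keeps no state and ignores all previous percepts.

def thermostat_agent(percept_temperature):
    if percept_temperature < 18.0:   # condition -> action
        return "heat_on"
    if percept_temperature > 24.0:
        return "heat_off"
    return "no_op"                    # no rule fires

print(thermostat_agent(15.0))  # heat_on
print(thermostat_agent(26.0))  # heat_off
print(thermostat_agent(21.0))  # no_op
```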

This agent function succeeds only when the environment is fully observable. Some reflex agents can also contain information on their current state, which allows them to disregard conditions whose actuators are already triggered. Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments. Model-based reflex agents: A model-based agent can handle a partially observable environment. Its current state is stored inside the agent, which maintains some kind of structure describing the part of the world that cannot be seen.

This knowledge about "how the world works" is called a model of the world, hence the name "model-based agent." A model-based reflex agent should maintain some sort of internal model that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. It then chooses an action in the same way as the reflex agent. Goal-based agents: Goal-based agents further expand on the capabilities of model-based agents by using "goal" information. Goal information describes situations that are desirable.

This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. Search and planning are the subfields of artificial intelligence devoted to finding action sequences that achieve the agent's goals. In some instances the goal-based agent appears less efficient, but it is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified. Utility-based agents: Goal-based agents only distinguish between goal states and non-goal states. It is, however, possible to define a measure of how desirable a particular state is.

This measure can be obtained through a utility function that maps a state to a measure of the utility of that state. A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent. The term utility can be used to describe how "happy" the agent is. A rational utility-based agent chooses the action that maximizes the expected utility of the action's outcomes, that is, the utility the agent expects to derive, on average, given the probabilities and utilities of each outcome.
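The expected-utility rule above can be sketched in a few lines; the actions, probabilities, and utilities below are invented purely for illustration.

```python
# Utility-based choice sketch: pick the action that maximizes expected
# utility, i.e. the probability-weighted sum of outcome utilities.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical action model for a game character.
actions = {
    "attack":  [(0.6, 10.0), (0.4, -20.0)],  # risky: EU = -2.0
    "retreat": [(1.0, 1.0)],                  # safe:  EU =  1.0
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # retreat
```

With these numbers the risky action has a higher best-case payoff but a lower expectation, so the rational agent retreats.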

A utility-based agent has to model and keep track of its environment, tasks that have involved a great deal of research on perception, representation, reasoning, and learning. Learning agents: Learning has the advantage that it allows agents to initially operate in unknown environments and to become more competent than their initial knowledge alone might allow. The most important distinction is between the "learning element," which is responsible for making improvements, and the "performance element," which is responsible for selecting external actions.

The learning element uses feedback from the "critic" on how the agent is doing and determines how the performance element should be modified to do better in the future. The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions. The last component of the learning agent is the "problem generator." It is responsible for suggesting actions that will lead to new and informative experiences.
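The decomposition above can be sketched schematically; the class below is a deliberately crude illustration (a single numeric parameter stands in for the whole performance element), with all names and numbers invented.

```python
# Learning-agent skeleton: a performance element picks actions, a critic
# scores them, and a learning element nudges the performance element so
# it does better in the future.

class LearningAgent:
    def __init__(self):
        self.bias = 0.0  # crude internal parameter of the performance element

    def performance_element(self, percept):
        """Selects an external action from the current percept."""
        return percept + self.bias

    def critic(self, action, target):
        """Feedback: how far was the action from what it should have been?"""
        return target - action

    def learning_element(self, feedback, rate=0.5):
        """Uses the critic's feedback to improve the performance element."""
        self.bias += rate * feedback

agent = LearningAgent()
for _ in range(20):                      # repeated experience
    action = agent.performance_element(1.0)
    agent.learning_element(agent.critic(action, target=3.0))

print(round(agent.performance_element(1.0), 3))  # converges toward 3.0
```

A problem generator would sit outside this loop, occasionally proposing exploratory percepts instead of the fixed one used here.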

According to other sources, some of the sub-agents (not already mentioned in this treatment) that may be part of an intelligent agent, or complete intelligent agents in themselves, are: decision agents (geared to decision making); input agents (that process and make sense of sensor inputs, e.g., neural-network-based agents); processing agents (that solve a problem like speech recognition); spatial agents (that relate to the physical real world); and world agents (that incorporate a combination of all the other classes of agents to allow autonomous behaviors).

Believable agents – agents exhibiting a personality. Physical agents – a physical agent is an entity that perceives through sensors and acts through actuators. Temporal agents – a temporal agent may use time-based stored information to offer instructions or data to a computer program or human being, and takes in percepts to adjust its subsequent behaviors. Game theory in artificial intelligence: Game theory is a branch of mathematics devoted to studying interaction among rational and self-interested agents. The field took on its modern form in the 1940s and 1950s, with even earlier antecedents.

Although it has had occasional and significant overlap with computer science over the years, game theory received most of its early study from economists. Indeed, game theory now serves as perhaps the main analytical framework in microeconomic theory, as evidenced by its prominent role in economics textbooks and by the many Nobel prizes awarded to prominent game theorists. Artificial intelligence got its start shortly after game theory, and indeed pioneers such as von Neumann and Simon made early contributions to both fields.

Both game theory and AI draw on decision theory. For example, one prominent view defines artificial intelligence as the study and construction of rational agents, and hence takes a decision-theoretic approach when the world is stochastic. However, artificial intelligence spent most of its first forty years focused on the design and analysis of agents that act in isolation, and hence had little need for game-theoretic analysis. Starting in the mid-to-late 1990s, game theory became a major topic of study for computer scientists, for at least two main reasons.

First, economists began to be interested in systems whose computational properties posed serious barriers to practical use, and hence reached out to computer scientists; notably, this occurred around the study of combinatorial auctions. Second, the rise of distributed computing in general and the Internet in particular made it increasingly necessary for computer scientists to study settings in which intelligent agents reason about and interact with other agents. Game theory generalizes the decision-theoretic approach which was already widely adopted by computer scientists, and so was a natural choice.

The resulting research area, fusing a computational approach with game-theoretic models, has come to be called algorithmic game theory. This field has grown considerably in the last few years. It has a significant and growing presence in major AI conferences such as IJCAI, AAAI, and AAMAS, and in journals such as AIJ, JAIR, and JAAMAS. It also has three dedicated archival conferences of its own: the ACM Conference on Electronic Commerce (ACM-EC), the Workshop on Internet and Network Economics (WINE), and the Symposium on Algorithmic Game Theory (SAGT).

It is necessary to distinguish algorithmic game theory from multiagent systems, a somewhat older and considerably broader research area within AI. While multiagent systems indeed encompasses most game-theoretic work within AI, it has a much wider ambit, also including non-game-theoretic topics such as software engineering paradigms, distributed constraint satisfaction and optimization, logical reasoning about other agents' beliefs and intentions, task sharing, argumentation, distributed sensing, and multi-robot coordination. Algorithmic game theory has received considerable recent study outside artificial intelligence.

The term first gained currency among computer science theorists, and is now used beyond that community in networking, security, and other areas. It has been slower to catch on in AI, where to date the moniker "multiagent systems" is more broadly used. We argue, however, that there are advantages to designating some AI research as algorithmic game theory. First, the use of this label stresses commonalities between AI research and work by computer scientists in other areas, particularly theorists. It is important to ensure that AI research remains connected to this quickly growing body of work, for the benefit of researchers both inside and outside of AI.

Second, at this point multiagent systems is a huge research area, and only some of this research is game theoretic. It is thus sensible to have a coherent name for multiagent systems work that takes a game-theoretic approach. At this point the reader might wonder what characterizes AI work within algorithmic game theory, as distinct, e.g., from work in the theory community. While it is difficult to draw sharp distinctions between these literatures, we note two key differences in the sorts of questions emphasized.

First, algorithmic game theory researchers in AI are often interested in reasoning about practical multiagent systems. AI work has thus tended to emphasize elaborating theoretical models to make them more realistic, scaling up to larger problems, using computational techniques in settings too complex for analysis, and addressing prescriptive questions about how agents should behave in the face of competition. Second, AI has long studied practical techniques for solving computationally hard problems, and many of these techniques have found application to problems in game theory.

Algorithmic game theory work in AI thus often emphasizes methods for solving practical problems under resource constraints, rather than considering computational hardness results to be insurmountable roadblocks. Communication theory: The interferences experienced previously are mainly due to the robots using the Industrial, Scientific and Medical (ISM) bands. Many electronic communication units use the ISM bands, which are unlicensed frequencies subject to certain constraints. Since urban search and rescue (USAR) robots are used to save lives, it is suggested that licensed frequencies be utilized. This will significantly prevent interference.

The output power between the control unit and the robot can be constrained to prevent a signal from one unit overwhelming the signals from other units. Another reason for failed robot communication is the loss of signals between the robot and its control unit. This is mainly caused by the frequency used. Wavelength is inversely proportional to frequency, and antenna size is proportional to wavelength; therefore, the higher the frequency, the smaller the antenna will be. Transmission efficiency decreases as higher frequencies are used. The signal penetration into buildings is also affected by the frequency used.

Higher frequencies are capable of penetrating more dense materials than lower frequencies. The disadvantage of higher frequencies is that small items, such as dust particles, resonate at the high frequency, causing them to absorb the power of the signal. It is therefore best to use a frequency in the middle of the two extremes, allowing radio communication to be optimized. The comparison of the different factors considered is shown in figure 2. Subsequently, the decision is to use UHF frequencies, as these are able to penetrate with a relatively low power output and have relatively good signal penetration properties.
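The proportionalities above can be checked with a quick calculation: wavelength is the speed of light divided by frequency, and a simple quarter-wave antenna scales with the wavelength. The frequencies below are illustrative examples, not values from the robots discussed.

```python
# Quick check of the stated relationships: lambda = c / f (wavelength
# inversely proportional to frequency), and antenna size proportional
# to wavelength (quarter-wave antenna shown as an example).

C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_m(frequency_hz):
    return C / frequency_hz

def quarter_wave_antenna_m(frequency_hz):
    return wavelength_m(frequency_hz) / 4

for f_mhz in (100, 450, 2400):  # illustrative VHF, UHF, and ISM-band values
    f = f_mhz * 1_000_000
    print(f"{f_mhz} MHz: wavelength {wavelength_m(f):.3f} m, "
          f"quarter-wave antenna {quarter_wave_antenna_m(f):.3f} m")
```

The output makes the trade-off concrete: moving from 100 MHz to 2.4 GHz shrinks the antenna by a factor of 24.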

System overview: The Betony is available for public use on our servers. Users may register accounts on the website and log in. They can then view the description and the programming interfaces of each game. Once users have finished developing their agents, they can upload the agents' source code using a simple web form. After that, users can participate in the contests held on the platform with their own agents. They can also just start a single match with other agents on the platform. A match is a single game process played by several agents. The agents take actions in turns until the game is over.

The result of the game can reflect the relative intelligence levels of the agents. A contest is a set of organized matches in which a set of agents participates. The objective of a contest is to rank the agents by their relative intelligence levels. The matches of a contest can be organized under different competition systems, such as round robins and Swiss systems. A series of contests can be established for a long-term project, such as a full-semester course or a large-scale competition. The Betony can be divided into three parts: the frontend, the storage system, and the judge system.

The relationship among the three parts is shown in the figure below. The frontend mainly refers to the Betony website. Most of the interactions with users are completed by this part, such as registering accounts, uploading agents, attending contests, and viewing results. The storage system consists of the storage for the agents' executable files and logs, plus the database storing information about users, agents, and contests. The judge system consists of the contest management module and the judge module. The contest management module can run a contest according to a certain competition system.

It decides the players of each single match, and then calls the judge module to run the match and record the logs, including the winner, the scores, and the process of the match. The Betony supports several features that make it more entertaining as well as educational. First, it provides a visualized way to view the match process beyond log files. There are Flash animations for each game, which can replay the process of a given match according to the log files. We find that users of the Betony spend over half of their time watching the Flash animations.

According to feedback from the students, this also makes them more interested in developing agents and participating in the contests. Second, we allow users to choose their opponents from among thousands of agents on our platform. In this way, users can learn AI strategies from others' agents and improve their own algorithms. We have also provided a privacy function to protect users' agents. Users who would rather not share their agents can set an agent's type to private, so that it is no longer available to be chosen as an opponent by others.

The Betony runs in a dynamic environment with multiple agents in a sequential and continuous manner. It is designed around the round-robin and Swiss systems. In the round-robin approach, each player meets all other contestants in turn, whereas in the Swiss system players are paired to play each other over several rounds of competition. Judge, control, display, and introduction files are required for all new games to run on the server. Communication between the controller and the judge, in both directions, is established using a set of predefined functions.
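The round-robin scheduling described above can be sketched with the standard circle method; the function and player names below are invented for illustration and are not the platform's actual interface.

```python
# Round-robin scheduling sketch: every agent meets every other agent
# exactly once, using the classic circle method (fix one player,
# rotate the rest between rounds).

def round_robin_rounds(players):
    players = list(players)
    if len(players) % 2:
        players.append(None)  # dummy entry gives one player a bye
    n = len(players)
    rounds = []
    for _ in range(n - 1):
        rounds.append([(players[i], players[n - 1 - i])
                       for i in range(n // 2)
                       if None not in (players[i], players[n - 1 - i])])
        # rotate all entries except the first
        players = [players[0]] + [players[-1]] + players[1:-1]
    return rounds

for rnd, matches in enumerate(round_robin_rounds(["A", "B", "C", "D"]), 1):
    print(rnd, matches)
```

Four players yield three rounds of two matches each: all six possible pairings, each played once. A Swiss system would instead re-pair players each round based on their running scores.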

Problem statement: Game software is one of the key components of modern software engineering. However, there is little literature on, or understanding of, how intelligent agents work in various game software. The purpose of this study is to determine the components of game AI, how they work, their environment and challenges, and the relationship between good intelligent agents and high-performance games. Techniques and principles of game AI design: Game industry observers expect that the next revolution in game AI will be learning and agent adaptation.

Developers have been pursuing and researching learning techniques. To be believable, the AI in a game must simulate cognition, sense the environment realistically, and act convincingly within that context. In defining game AI, the programmer has to code agent activity and behavior so that characters appear intelligent and respond realistically to perceived conditions and situations. Ironically, the simplest AI techniques – finite-state machines, decision trees, and production rule systems – have been used most successfully by the game AI community.

The following is a listing of AI techniques relevant to present and future game AI:

Expert systems – An expert system represents expertise within a knowledge database and performs automated reasoning in response to a series of queries.

Finite-state machines – Simple, rule-based systems in which a finite number of "states" are connected in a directed graph by "transitions" between states. Finite-state machines are the most used software AI in the computer game industry. They are easy to program, simple to understand, and easy to debug.

Production systems – Comprised of a database of associated rules. Rules are conditional program statements with consequent actions that are performed if the specified conditions are satisfied.

Decision trees – Data structures created by an algorithm that outputs an activity decision based on automatic selection from input data and a static node tree.

Case-based reasoning – Analysis of a set of inputs and comparison to a database of possible situations and advisable behavioral outputs.

Genetic algorithms – Genetic programming techniques attempt to imitate the dynamics of macro-evolution by performing directed selection and modification on associations of programs, subroutines, and assemblages of parameters.

Neural networks – A class of machine learning techniques based on the architecture of interconnected neural components of a network. A neural network operates by repeatedly adjusting internal numeric parameters with the goal of optimizing its response to a wide variety of circumstances.

Fuzzy logic – Utilizes numeric values to represent a degree of order in specified relationships or phenomena. The technique allows for more expressive reasoning and greater subtlety and richness than traditional Boolean or Bayesian inference.
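As a concrete sketch of the finite-state machines described above, a guard character might be driven by a handful of states connected by event-driven transitions. The states and events below are invented for illustration.

```python
# Finite-state machine sketch for a guard NPC: a finite set of states
# connected in a directed graph by event-driven transitions.

TRANSITIONS = {
    ("patrol", "sees_player"): "chase",
    ("chase",  "lost_player"): "search",
    ("chase",  "low_health"):  "flee",
    ("search", "sees_player"): "chase",
    ("search", "gave_up"):     "patrol",
    ("flee",   "safe"):        "patrol",
}

def step(state, event):
    """Follow a transition if one exists; otherwise stay in place."""
    return TRANSITIONS.get((state, event), state)

state = "patrol"
for event in ["sees_player", "lost_player", "sees_player", "low_health", "safe"]:
    state = step(state, event)
    print(event, "->", state)
# the guard ends back in "patrol"
```

The ease-of-debugging claim in the listing is visible here: the whole behavior is a small, inspectable table, and any state/event pair can be tested in isolation.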