In the previous post we defined the rational agent. The next important thing to think about is task environments. Task environments are essentially the problems to which rational agents are the solutions. In this post we begin by showing how to specify a task environment and then survey the different flavors of task environments. The type of task environment directly influences the design of the agent program.
How to specify the Task Environment?
A task environment consists of the Performance measure, Environment, Actuators and Sensors; collectively we call this the PEAS description. In designing a rational agent, the first step is to define the task environment as clearly as possible. The PEAS description for an autonomous taxi is given below:

Performance measure: safe, fast, legal, comfortable trip, maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering, accelerator, brake, signal, horn, display
Sensors: cameras, sonar, speedometer, GPS, odometer, keyboard
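A PEAS description is just structured data, so it can be written down directly in code. As a minimal sketch (the `PEAS` class and its field names are our own illustration, not a standard API), the taxi description above might look like this:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A minimal container for a PEAS task-environment description."""
    performance: list  # what the agent is scored on
    environment: list  # what the agent operates in
    actuators: list    # what the agent acts with
    sensors: list      # what the agent perceives with

# The autonomous-taxi description from the text:
taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "keyboard"],
)
```

Writing the four components out explicitly like this forces the designer to state the performance measure before thinking about the agent program at all.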
Properties of Task Environment
The range of task environments that might arise in AI is obviously vast. However, we can identify a fairly small number of dimensions along which task environments can be classified. These dimensions determine, to a large extent, the appropriate agent design and the applicability of each of the principal families of techniques for agent implementation.
1. Fully Observable vs. Partially Observable
If an agent’s sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action (relevance depends on the performance measure). Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world.
An environment may be partially observable because of noisy or inaccurate sensors, or because parts of the state are simply missing from the sensor data. If the agent has no sensors at all, then the environment is unobservable.
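The difference shows up directly in what a percept contains. As a rough sketch (the four-square world and both sensor functions are hypothetical examples of ours), a fully observable sensor returns the whole state, while a partial one returns only a fragment:

```python
# A hypothetical four-square world: the true state is the dirt status of each square.
true_state = {"A": "dirty", "B": "clean", "C": "dirty", "D": "clean"}

def full_sensor(state):
    """Fully observable: the percept is the complete state of the world."""
    return dict(state)

def local_sensor(state, agent_location):
    """Partially observable: the agent senses only the square it occupies,
    so it would need internal state to keep track of the rest."""
    return {agent_location: state[agent_location]}
```

An agent with `full_sensor` can choose its action from the percept alone; an agent with `local_sensor` must remember what it has seen.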
2. Single Agent vs. Multi Agent
The distinction between single-agent and multi-agent environments may seem simple enough. For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.
In a multi-agent environment, if all agents try to maximize one another’s performance measures, the environment is cooperative; if maximizing one agent’s performance measure implies minimizing another’s, the environment is competitive.
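The cooperative/competitive split can be seen in how the agents' scores relate. As a toy sketch (both payoff functions are our own illustration), a competitive game like chess is zero-sum, while a cooperative setting shares one score:

```python
def chess_payoffs(outcome):
    """Competitive (zero-sum): one agent's gain is the other's loss.
    outcome is +1 if agent 1 wins, -1 if agent 2 wins, 0 for a draw."""
    return outcome, -outcome

def cooperative_payoffs(score):
    """Cooperative: both agents share the same performance measure."""
    return score, score
```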
3. Deterministic vs. Stochastic
If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise it is stochastic.
In principle, an agent need not worry about uncertainty in a fully observable, deterministic environment. However, if the environment is partially observable, then it may appear to be stochastic.
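The distinction is easy to state as a transition function. In this sketch (the one-dimensional world and the `slip_prob` wheel-slip model are hypothetical), the deterministic version always produces the same next state for a given state and action, while the stochastic version does not:

```python
import random

def deterministic_step(position, action):
    """Deterministic: the next state is completely determined
    by the current state and the action."""
    return position + (1 if action == "forward" else -1)

def stochastic_step(position, action, slip_prob=0.2, rng=random):
    """Stochastic: with probability slip_prob the wheels slip and the
    agent stays put, so the same (state, action) pair can produce
    different next states."""
    if rng.random() < slip_prob:
        return position
    return deterministic_step(position, action)
```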
4. Episodic vs. Sequential
In an episodic task environment, the agent’s experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes. Many classification tasks are episodic; for example, an agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions. Moreover, the current decision doesn’t affect whether the next part is defective.
In sequential environments, the current decision could affect all future decisions. Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences.
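The practical consequence is that an episodic agent can be a pure function of the current percept, while a sequential agent needs history. A small sketch (the defect classifier and the toy explore/exploit rule are our own illustrations):

```python
def episodic_agent(percept):
    """Episodic: the decision depends only on the current percept."""
    return "reject" if percept["defective"] else "accept"

class SequentialAgent:
    """Sequential: earlier percepts influence later decisions,
    so the agent must carry history (or a summary of it) forward."""
    def __init__(self):
        self.history = []

    def act(self, percept):
        self.history.append(percept)
        # Toy rule: behaviour changes as the interaction unfolds.
        return "explore" if len(self.history) % 2 else "exploit"
```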
5. Static vs. Dynamic
If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent, otherwise, it is static. Static environments are easy to deal with because the agent need not keep looking while it is deciding on an action, nor need it worry about the passage of time.
If the environment itself does not change with the passage of time but the agent’s performance score does, then we say that the environment is semi-dynamic.
Car driving is dynamic, chess played with a clock is semi-dynamic, and crossword puzzles are static.
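The semi-dynamic case can be sketched as a score that decays with deliberation time even though the world itself stands still (the linear penalty here is a made-up example, not how chess clocks actually score):

```python
def semidynamic_score(base_score, seconds_used, penalty_per_second=0.1):
    """Chess with a clock: the board does not change while the agent
    deliberates, but its performance score drops as time passes."""
    return base_score - penalty_per_second * seconds_used
```

In a truly static environment `seconds_used` would be irrelevant; in a dynamic one, the state itself would change while the agent thinks.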
6. Discrete vs. Continuous
The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. For example, the chess environment has a finite number of distinct states. Chess also has a discrete set of percepts and actions. Taxi driving is a continuous-state and continuous-time problem.
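The contrast shows up in the types of the action space. A quick sketch (the sampled chess moves and the ±45-degree steering range are illustrative assumptions):

```python
# Discrete: a finite set of distinct actions, as in chess
# (only a handful of moves are shown here).
chess_actions = {"e4", "Nf3", "O-O"}

def steering_action(angle_degrees):
    """Continuous: a steering angle varies smoothly over a real interval
    (clamped here to a hypothetical +/-45 degree range)."""
    return max(-45.0, min(45.0, angle_degrees))
```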
7. Known vs. Unknown
In a known environment, the outcomes for all actions are given. However, if the environment is unknown, the agent will have to learn how it works in order to make good decisions.
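In an unknown environment, learning how it works can be as simple as tallying observed outcomes. A minimal sketch (the "push a door" scenario and these helper functions are our own illustration):

```python
from collections import defaultdict

# In an unknown environment the outcomes of actions are not given,
# so the agent estimates them from experience.
counts = defaultdict(lambda: defaultdict(int))

def observe(action, outcome):
    """Record one observed (action, outcome) pair."""
    counts[action][outcome] += 1

def estimated_outcome(action):
    """The most frequently observed outcome for an action so far."""
    seen = counts[action]
    return max(seen, key=seen.get) if seen else None

observe("push", "door opens")
observe("push", "door opens")
observe("push", "nothing happens")
```

In a known environment this estimation step is unnecessary: the outcome model is given up front.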
As one may expect, the hardest case is partially observable, multi-agent, stochastic, sequential, dynamic, continuous and unknown. Taxi driving is hard in all these senses, except that for the most part the driver’s environment is known.
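The classifications discussed in this post can be collected into a single summary table (written here as a Python dict; the layout is ours, the classifications are the ones stated above):

```python
# The three running examples, classified along the dimensions above.
environments = {
    "crossword puzzle": dict(observable="fully", agents="single",
                             deterministic=True, episodic=False,
                             dynamics="static", discrete=True),
    "chess with a clock": dict(observable="fully", agents="multi",
                               deterministic=True, episodic=False,
                               dynamics="semi-dynamic", discrete=True),
    "taxi driving": dict(observable="partially", agents="multi",
                         deterministic=False, episodic=False,
                         dynamics="dynamic", discrete=False),
}
```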
This article is contributed by Ram Kripal.