Any artificial system has three fundamental components: an agent, an environment, and the coupling between them. Thinking in this framework makes it easy to understand and analyze an AI system. In an earlier post we introduced the concept of Artificial Intelligence; here we focus on agents.
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
A human agent has eyes, ears, and other organs as sensors, and hands, legs, a vocal tract, and so on as actuators. A robotic agent might have cameras and infrared range finders as sensors and various motors as actuators. A software agent receives keystrokes, file contents, and network packets as sensory input and acts on its environment by displaying on the screen, writing files, and sending network packets.
We use the term percept to refer to the agent's perceptual input at any given instant. A percept sequence is the complete history of everything the agent has ever perceived.
In general, an agent's choice of action at any given instant can depend on the entire percept sequence observed to date, but not on anything it hasn't perceived.
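As a minimal sketch of this idea, the toy agent below records every percept it receives and may consult the whole sequence when choosing an action; the percepts ("obstacle") and actions ("Stop", "Forward") are hypothetical, chosen just for illustration.

```python
class HistoryAgent:
    """Toy agent whose action choice may depend on its entire
    percept sequence, but on nothing it hasn't perceived."""

    def __init__(self):
        self.percept_sequence = []  # complete history of percepts

    def step(self, percept):
        self.percept_sequence.append(percept)
        # The decision below may consult the full history, but nothing else.
        if "obstacle" in self.percept_sequence:
            return "Stop"
        return "Forward"
```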
One more thing is needed to describe an agent completely: how does the agent know which action to take for a given percept sequence? We must specify the agent's choice of action for every possible percept sequence. Mathematically speaking, the agent's behavior is described by the agent function, which maps any given percept sequence to an action.
The agent function for an artificial agent is implemented by an agent program. It is important to keep the two ideas distinct: the agent function is an abstract mathematical description, while the agent program is a concrete implementation running within some physical system.
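Here is a minimal sketch of that distinction, using a hypothetical two-square vacuum world (squares A and B that can be Dirty or Clean). For simplicity, the mapping here depends only on the current percept rather than the full percept sequence.

```python
# The agent function as an abstract, explicit table: percept -> action.
AGENT_FUNCTION = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def agent_program(percept):
    """A concrete agent program implementing the same mapping in code."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"
```

The table and the program describe the same behavior; the table is the specification, and the program is one of many possible implementations of it.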
The job of AI is to design the agent program that implements the agent function, the mapping from percepts to actions.
We assume that this program will run on some sort of computing device with physical sensors and actuators. We call this computing device the architecture.
Agent = Architecture + Program
Obviously, the program we choose has to be one that is appropriate for the architecture. If the program is going to recommend actions like Walk, the architecture had better have legs. The architecture might be just an ordinary PC, or it might be a robotic car with several onboard computers, cameras, and other sensors.
In general, the architecture makes the percepts from the sensors available to the program and feeds the program's action choices to the actuators as they are generated.
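A minimal sketch of this coupling is below. The environment, sensors, and actuators objects are hypothetical stand-ins for whatever the physical architecture provides.

```python
def run(environment, sensors, actuators, agent_program, steps=100):
    """The architecture's job: shuttle percepts in and actions out."""
    for _ in range(steps):
        percept = sensors.read(environment)      # make percepts available
        action = agent_program(percept)          # the program chooses
        actuators.execute(environment, action)   # feed the action to actuators
```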
Good Behavior: Rational Agent
A rational agent is one that does the right thing. But the problem is: how do we define the right thing?
When an agent is plunked down in an environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to go through a sequence of states. If the sequence is desirable, then the agent has performed well. This notion of desirability is captured by a performance measure that evaluates any given sequence of environment states.
Notice that the definition above refers to environment states, not agent states. If we defined success in terms of the agent's opinion of its own performance, an agent could achieve perfect rationality simply by deluding itself that its performance was perfect.
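As a minimal sketch, here is one possible performance measure for the hypothetical vacuum world: it scores a sequence of environment states, awarding one point for each clean square at each time step.

```python
def performance_measure(state_sequence):
    """Evaluate a sequence of environment states (not agent states)."""
    return sum(
        sum(1 for status in state.values() if status == "Clean")
        for state in state_sequence
    )

# Example: two squares A and B observed over three time steps.
history = [
    {"A": "Dirty", "B": "Dirty"},
    {"A": "Clean", "B": "Dirty"},
    {"A": "Clean", "B": "Clean"},
]
print(performance_measure(history))  # 0 + 1 + 2 = 3
```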
So what is rational at any given time depends on four things:
- The performance measure that defines the criterion of success.
- The agent’s prior knowledge of the environment.
- The actions the agent can perform.
- The agent’s percept sequence to date.
This leads to the following definition of a rational agent:
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
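A minimal sketch of this definition: pick the action with the highest expected performance. The outcome_model function, which returns (probability, score) pairs per action, is a hypothetical stand-in for the agent's built-in knowledge combined with the evidence from its percepts.

```python
def rational_action(actions, outcome_model):
    """Choose the action that maximizes expected performance."""
    def expected_performance(action):
        return sum(p * score for p, score in outcome_model(action))
    return max(actions, key=expected_performance)

# Example: "Suck" surely scores 1; "Right" scores 1 only half the time.
model = {"Suck": [(1.0, 1)], "Right": [(0.5, 0), (0.5, 1)]}
print(rational_action(["Suck", "Right"], model.get))  # -> Suck
```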
How to define the Performance Measure?
There is no one fixed performance measure for all tasks and agents; typically, a designer will devise one appropriate to the circumstances. As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave. For example, measuring a vacuum agent by the amount of dirt it sucks up invites it to dump the dirt and suck it up again, whereas measuring the number of clean squares over time rewards what we actually want.
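The sketch below contrasts the two approaches for the hypothetical vacuum world: a measure over the agent's behavior, which can be gamed, versus a measure over environment states, which rewards what we actually want.

```python
def behavior_measure(action_sequence):
    """Scores how the agent behaves; gameable by re-sucking dumped dirt."""
    return sum(1 for action in action_sequence if action == "Suck")

def environment_measure(state_sequence):
    """Scores what we actually want in the environment: clean squares."""
    return sum(
        sum(1 for status in state.values() if status == "Clean")
        for state in state_sequence
    )
```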
An agent is omniscient if it knows the actual outcome of its actions and can act accordingly. Achieving omniscience is impossible in reality.
Rationality is not the same as omniscience (perfection). Rationality maximizes expected performance, while omniscience maximizes actual performance.