Here is a compilation of essays on ‘Artificial Intelligence’ for class 11 and 12. Find paragraphs, long and short essays on ‘Artificial Intelligence’ especially written for school and college students.

Essay # 1. Introduction to Artificial Intelligence:

Artificial Intelligence (AI), a branch of computer science, is concerned with the design of intelligence in an artificial device. The term was coined by John McCarthy in 1956.

This definition of AI contains two terms:

Intelligence and Artificial Device. So let us explain these two terms.


Intelligence:

Is intelligence that which characterizes humans, or is there an absolute standard of judgement?

Accordingly, there are two possibilities:

First, a system with intelligence is expected to behave as intelligently as a human being. Secondly, a system with intelligence is expected to behave in the best possible manner. What type of behaviour are we talking about?


i. Are we looking at the thought process or reasoning ability of the system?

Or

ii. Are we interested in the final manifestations of the system in terms of its actions?

Further, quite simple behaviour can be intelligent, yet quite complex behaviour performed by insects can be unintelligent. What is the difference? Consider the behaviour of the digger wasp Sphex ichneumoneus. The mother wasp brings food (a paralyzed insect) to her burrow. She deposits it on the threshold, goes inside the burrow to check for intruders, and then, if the coast is clear, carries in the food.


The unintelligent nature of the wasp's behaviour is revealed if a watching experimenter moves the food a few inches while the female wasp is inside. On emerging, the wasp repeats the whole procedure: she carries the food to the threshold once again, goes in to look around, and emerges. She can be made to repeat this cycle of behaviour up to forty times or even more. What is conspicuously absent in the case of Sphex is the ability to adapt one's behaviour to fit new circumstances. Mainstream thinking in psychology regards human intelligence not as a single ability or cognitive process but rather as an array of several components, such as reasoning, problem solving and perception.

Given this scenario, different interpretations have been offered by different researchers in defining the scope and definition of AI.

One view is that AI is about designing systems which are as intelligent as humans, i.e., understanding human thought and building machines which emulate the human thinking process. This is the cognitive science approach.

The second approach is best embodied by the concept of the Turing Test. Turing held that, in the future, computers could be programmed to acquire abilities rivalling human intelligence; this is the more widely held view. In view of this interpretation of AI, let us compare the working of the human mind with that of computers, so as to design a thinking machine.


Man Vs Computers:

Different definitions of AI propounded by workers in the field are based on a sharp understanding of the vital difference between man and the computer, which is merely a machine. An interesting feature of AI, however, is that it covers those operations through which computers are made to do things which at the moment are done by humans. This may look surprising to the layman, because to him the computers appear to take over activities which belong to the realm of human beings and are beyond the scope of mere machines.

People have traditionally outperformed computers in activities which involve intelligence. We do not just process information; we understand it, make sense of what we see and hear, and then come out with new ideas.

We use common sense to make our way through a world which sometimes appears highly illogical.


Common sense knowledge includes knowing what we know vaguely as well as what we know clearly. For example, if we were asked to recall the phone number of our college or of a good friend, we would search our memory, trying to retrieve the information.

But if we were asked to give the phone number of India's Prime Minister, we would not know the answer and would not even attempt a retrieval. Now, if we were asked for the phone number of Tulsi Das (writer of the epic The Ramayana), we would know at once that no answer exists, since telephones were not around in Tulsi Das's time.

If people are more intelligent than computers and if AI tries to improve the performance of computers in activities which people do better, then the goal of AI is to make computers more ‘intelligent’.

So, a definition of AI can be:

AI is the part of computer science concerned with designing intelligent computer systems, that is, systems which exhibit the characteristics we associate with intelligence in human behaviour.

Continuing with the debate: why is the term 'intelligence' reserved for humans, and why are computers not considered to be intelligent? Winston (1984) has remarked that "defining intelligence usually takes a semester-long struggle, and even then I am not sure we ever get a definition really nailed down". He has further remarked that, since the exact definition of intelligence has proven to be extremely elusive, the following is a partial list of characteristics which intelligence should possess:

1. To respond to situations very flexibly. You do not necessarily respond the same way each time you face an identical problem. If you did, you would be exhibiting mechanical rather than intelligent behaviour.

2. To make sense out of ambiguous or contradictory messages. You can understand ambiguous or contradictory statements based on your knowledge and experience, and by placing the situation in context.

3. To attach relative importance to different elements of a situation. You face many problems, and you tackle them according to their level of importance.

4. To find similarities between situations despite the differences which may separate them. By making use of past experience you can solve new problems, even though the two situations are not exactly alike.

5. To draw distinctions between situations despite the similarities which may link them. Two situations may look similar on the surface, yet you are able to note the difference and adjust your reaction accordingly.

Though all these abilities come under common sense, they cannot easily be imbibed or simulated by a computer.

Now consider some activities such as:

1. What did you eat at a friend's wedding? You cannot list the mental steps required to remember what you ate there.

2. What muscular contractions are necessary to pick up a cup of tea?

3. Can we describe the processes of reading and understanding a book?

The research done by cognitive scientists helps to explain the workings of human intelligence. This, in turn, has helped workers in the field of AI to simulate that intelligence on a computer.

Workers in AI use many different techniques to make computers more intelligent:

1. One commonly used technique is to determine the process used by humans to produce a particular type of intelligent behaviour and then to simulate that process on a computer.

2. The other technique, used by cognitive scientists, is to determine the processes which produce human intelligence in a given situation. These processes may then be programmed in an attempt to simulate that behaviour. This AI technique is called modeling or simulation. (In fact, a model of intelligent human behaviour is an effort to simulate that behaviour on a computer, to determine whether the computer will exhibit the same intelligent behaviour as a human does.)

As shown in Fig. 1.1, the link between cognitive science and computer modeling is a continuous process. Cognitive scientists develop theories of human intelligence, which are programmed into computer models by AI researchers; the computer models are then used to test the validity of these theories. The feedback from the computer models allows the cognitive scientists to refine their theories, which can then be used to implement better models; the process is, of course, iterative.

Naturally, a question arises about the importance of the processes used to simulate human intelligence. Is it the goal of AI to simulate intelligent behaviour with a computer (by any means), or is it truly AI only if we simulate intelligence by using the same techniques as a human?

There is a difference of opinion in the AI community about this issue. Some scientists believe that the goal of AI is simply to simulate intelligent behaviour on a computer, using any technique which proves to be effective. Others claim that it is not AI when we simulate intelligence using procedures other than those which might be used by humans.

For example, suppose we want to program a computer to play chess. We could write the program to imitate the thought processes of a human chess expert, and this would be AI; or we might write the program to consider the relative merits of 10,000 different moves before making a move, though no human expert would ever evaluate so many possibilities.

Though both alternatives play chess, there would still be disagreement over whether the two alternatives can be categorized as AI. Some scientists would say that the second program exhibits intelligent behaviour, so it is AI; others would say that the program is not AI, because it uses techniques which are not representative of intelligent human thought processes.
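A minimal sketch may make the second, brute-force style of program concrete. It uses plain minimax search, which scores every legal move to a fixed depth; the Game interface used here (legal_moves, apply, evaluate) is an assumption of this sketch, and a real chess engine would supply its own versions:

# Minimax search: evaluate every legal move to a fixed depth and pick
# the best score. No human-style intuition is involved, only
# exhaustive evaluation of thousands of positions.
# NOTE: the Game interface (legal_moves, apply, evaluate) is an
# assumption of this sketch.

def minimax(game, state, depth, maximizing):
    """Return the minimax value of state, searched depth plies deep."""
    moves = game.legal_moves(state)
    if depth == 0 or not moves:
        return game.evaluate(state)          # static board score
    values = [minimax(game, game.apply(state, m), depth - 1, not maximizing)
              for m in moves]
    return max(values) if maximizing else min(values)

def best_move(game, state, depth=3):
    """Choose the move whose resulting position has the best value."""
    return max(game.legal_moves(state),
               key=lambda m: minimax(game, game.apply(state, m),
                                     depth - 1, False))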

AI is that branch of computer science which deals with symbolic, non-algorithmic methods of problem solving.

(i) Numeric Vs Symbolic:

Computers were initially designed to process numbers. Research has consistently shown, however, that people think symbolically rather than numerically, and human intelligence is partly based on our mental ability to manipulate symbols, rather than just numbers.
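To make the numeric/symbolic contrast concrete, below is a minimal sketch of symbolic processing: a program which differentiates an algebraic expression by manipulating the symbols of the expression itself, rather than evaluating any numbers. The tuple-based expression format is an assumption of this sketch:

# Symbolic differentiation: the program transforms symbols, not numbers.
# Expression format (assumed for this sketch): the variable is the
# string 'x', a constant is a number, ('+', a, b) is a sum and
# ('*', a, b) is a product.

def diff(expr):
    """Differentiate expr with respect to x, symbolically."""
    if expr == 'x':
        return 1
    if isinstance(expr, (int, float)):
        return 0
    op, a, b = expr
    if op == '+':
        return ('+', diff(a), diff(b))
    if op == '*':                 # product rule: (ab)' = a'b + ab'
        return ('+', ('*', diff(a), b), ('*', a, diff(b)))
    raise ValueError('unknown operator: %r' % (op,))

# d/dx of 3*x + x*x, returned as an (unsimplified) symbolic expression
print(diff(('+', ('*', 3, 'x'), ('*', 'x', 'x'))))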

(ii) Algorithmic Vs Non-Algorithmic:

An algorithm is a step-by-step procedure with well-defined starting and ending points, which is guaranteed to reach a solution to a specific problem. Computer architecture readily lends itself to this step-by-step approach, since conventional computer programs are based on algorithms. However, most human thought processes tend to be non-algorithmic; that is, our mental activities consist of more than just following logical, step-by-step procedures.
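A minimal example of an algorithm in exactly this sense is binary search: a well-defined starting point, a fixed step-by-step procedure, and a guaranteed terminating answer for any sorted input:

# Binary search: a textbook algorithm -- well-defined start and end
# points and a guaranteed solution to a specific problem.

def binary_search(items, target):
    """Return the index of target in sorted items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1          # discard the lower half
        else:
            hi = mid - 1          # discard the upper half
    return -1

print(binary_search([2, 5, 8, 13, 21], 13))   # prints 3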

AI research continues to be devoted to symbolic and non-algorithmic processing techniques in an attempt to emulate more closely human reasoning processes by a computer.

(iii) Heuristics:

Another definition of AI is based on a further key parameter of AI computing: heuristics. A heuristic is a rule of thumb which helps to determine how to proceed.

AI is the branch of computer science which deals with ways of representing knowledge using symbols rather than numbers, and with rules of thumb, or heuristic methods, for processing information.
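As a minimal sketch of heuristic processing, consider greedy best-first search on a grid: at each step it expands whichever cell looks closest to the goal according to the Manhattan-distance rule of thumb. The grid world and the passable predicate are assumptions of this sketch, and, unlike an algorithm, the heuristic carries no guarantee of finding the shortest route:

# Greedy best-first search: always follow the rule of thumb (Manhattan
# distance to the goal). Usually fast, but with no optimality guarantee.
import heapq

def manhattan(a, b):
    # the rule of thumb: straight-line grid distance to the goal
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_search(start, goal, passable):
    """Follow the heuristic toward goal; passable(cell) says whether a
    grid cell may be entered."""
    frontier = [(manhattan(start, goal), start)]
    came_from = {start: None}
    while frontier:
        _, cell = heapq.heappop(frontier)
        if cell == goal:
            path = []
            while cell is not None:       # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if passable(nxt) and nxt not in came_from:
                came_from[nxt] = cell
                heapq.heappush(frontier, (manhattan(nxt, goal), nxt))
    return None

# a 5x5 open grid: the heuristic walks straight toward (4, 3)
print(greedy_search((0, 0), (4, 3),
                    lambda p: 0 <= p[0] < 5 and 0 <= p[1] < 5))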

(iv) Pattern Matching:

Another definition of AI focuses on pattern matching techniques; in simplified terms, “Artificial Intelligence works with pattern matching methods which attempt to describe objects, events or processes in terms of their qualitative features and logical and computational relationships”.

Newspaper photographs are nothing more than collections of minute dots, yet without any conscious effort we discover patterns which reveal faces and other objects in those photos. Similarly, we make sense of the world by recognizing the relationships and patterns which help give meaning to the objects and events we encounter.

If computers are to become more intelligent, they must be able to make the same kinds of associations between the qualities of objects, events, and processes which come so naturally to humans.
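A toy version of such a pattern matcher is sketched below. Patterns are nested tuples in which symbols beginning with '?' are variables; matching an observed fact against a pattern yields the variable bindings. The fact and pattern representation is an assumption of this sketch:

# A classic AI-style pattern matcher: symbols beginning with '?' are
# variables, and matching a fact against a pattern produces bindings.

def match(pattern, fact, bindings=None):
    """Return a dict of variable bindings if fact fits pattern, else None."""
    if bindings is None:
        bindings = {}
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in bindings:           # variable already bound
            return bindings if bindings[pattern] == fact else None
        return {**bindings, pattern: fact}
    if (isinstance(pattern, tuple) and isinstance(fact, tuple)
            and len(pattern) == len(fact)):
        for p, f in zip(pattern, fact):
            bindings = match(p, f, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == fact else None

# the pattern ('parent', '?x', 'bob') matches the fact below,
# binding ?x to 'alice'
print(match(('parent', '?x', 'bob'), ('parent', 'alice', 'bob')))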

Essay # 2. Categories of Artificial Intelligence:

Research in AI divides into three categories:

1. “Strong” AI,

2. Applied AI and

3. Cognitive Simulation.

1. Strong AI:

Strong AI aims to build machines whose overall intellectual ability is indistinguishable from that of a human being.

Joseph Weizenbaum, of the MIT AI Laboratory, has described the ultimate goal of strong AI as being “nothing less than to build a machine on the model of man, a robot which is to have its childhood, to learn language as a child does, to gain its knowledge of the world by sensing the world through its own organs and ultimately to contemplate the whole domain of human thought”.

The term “strong AI”, now in wide use, was introduced for this category of AI research in 1980 by the philosopher John Searle, of the University of California at Berkeley.

Work in strong AI has caught the attention of the media, but by no means do all AI researchers view strong AI as worth pursuing. Excessive optimism in the 1950s and 1960s concerning strong AI has given way to an appreciation of the extreme difficulty of the problem, which is possibly the hardest that science has ever undertaken. To date, progress has been meagre. Some critics doubt whether research in the next few decades will produce even a system with the overall intellectual ability of an ant.

2. Applied AI:

Applied AI, also known as advanced information-processing, aims to produce commercially viable “smart” systems—such as, for example, a security system which is able to recognise the faces of people who are permitted to enter a particular building. Applied AI has already enjoyed considerable success.

3. Cognitive Simulation: 

In Cognitive Simulation (CS), computers are used to test theories about how the human mind works; for example, theories about how we recognise faces and other objects, or about how we solve abstract problems (such as the “missionaries and cannibals” puzzle).

The theory which is to be tested is expressed in the form of a computer program, and the program's performance at the task (e.g., face recognition) is compared to that of a human being. Computer simulations of networks of neurons have contributed both to psychology and to neurophysiology. The program Parry was written in order to test a particular theory concerning the nature of paranoia.
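The "missionaries and cannibals" puzzle mentioned above shows how such a program can be cast in machine terms: the puzzle is encoded as a space of states, and the machine searches that space. Below is a minimal breadth-first-search sketch; the (missionaries, cannibals, boat) state encoding, counted on the starting bank, is an assumption of this sketch:

# Missionaries and cannibals: three of each must cross a river in a
# two-seat boat, with cannibals never outnumbering missionaries on
# either bank. Breadth-first search finds the shortest plan.
from collections import deque

def safe(m, c):
    # a bank is safe if it has no missionaries, or at least as many
    # missionaries as cannibals -- checked for both banks
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def solve():
    """Return the shortest sequence of bank states from (3,3,1) to (0,0,0)."""
    start, goal = (3, 3, 1), (0, 0, 0)
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (m, c, b), path = queue.popleft()
        if (m, c, b) == goal:
            return path
        d = -1 if b else 1                # boat direction
        for dm, dc in ((1, 0), (2, 0), (0, 1), (0, 2), (1, 1)):
            state = (m + d * dm, c + d * dc, 1 - b)
            nm, nc = state[0], state[1]
            if (0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc)
                    and state not in seen):
                seen.add(state)
                queue.append((state, path + [state]))

print(solve())   # the sequence of bank states, eleven crossings in all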

Researchers in cognitive psychology typically view Computer Science as a powerful tool.

Essay # 3. History of Artificial Intelligence:

As a field, artificial intelligence does not have a long history: most of the basic ideas, and perhaps even more importantly the basic set of preconceptions, which make up AI can be traced back only to the middle years of the 20th century. The idea of creating an artificial intelligence, however, is an old one, though it sprang into full bloom only after 1950, with the idea of the Turing test.

In medieval Europe, Pope Sylvester II is credited with building a talking head with a limited vocabulary and a knack for prognostication: Sylvester would ask it a simple question about the future, and the artificial head would answer yes or no. Arab astrologers are said to have constructed a thinking machine called Zaira.

In the early 16th century the Czech rabbi Judah ben Loew is reported to have sculpted a living clay man, Joseph Golem, to spy on the gentiles of Prague (golem has become a synonym for an artificial man). Unfortunately, this particular golem grew overly aggressive and had to be dismantled.

In Mary Shelley's Frankenstein, perhaps the classic horror story, Dr. Victor Frankenstein created a humanoid who turned into the archetypal monster and became a murderer.

HAL, the lethal computer in Arthur C. Clarke's 2001, was endowed with a somewhat over-developed instinct for self-preservation. Interestingly, HAL exhibited features which are currently the subjects of AI research: he performed speech recognition and natural language processing, he was capable of making intelligent decisions, and he was designed to assist humans in the operation of a space vehicle.

His computer vision, too, was reported to be exceptionally well developed; it allowed him to read the lips of the crew, from which he learnt that they were planning to disconnect him. HAL was eventually turned off, but not before he killed one of the crew in an attempt to remain "alive".

Of course, not all of the smart machines in modern fiction are villains. Two of the lead characters in the Star Wars movie trilogy were machines, and quite intelligent ones. One of them, C-3PO, was a humanoid robot; his faithful companion, R2-D2, while not humanoid in form, possessed an unmistakably human-like intelligence.

The Proprio Foot (Times of India, 2006) comes closer to mimicking real foot action than any other prosthetic because it flexes at the ankle, using AI and a muscle-mimicking motor. Sensors in the ankle sample movement more than 1,000 times a second to determine walking speed and transmit the data to the on-board computer, which signals the foot to match the other foot's pace.

Several logicians and philosophers have laboured to formalize, and ultimately to mechanize, the "laws of thought". One of the classic names here is that of George Boole (1815-64), who invented Boolean algebra; his concerns pointed very much in the direction of artificial intelligence. The book upon which his fame is based is An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities.

It was in the twentieth century that the intellectual tools (logic and the study of formalism) and the physical devices, from vacuum tubes to transistors to ICs, became available. Alan Turing (1912-1954) was the first person to see clearly the possibility of computer intelligence. He worked with the precursors of modern computers during World War II.

After the war, in 1950, he published his famous article "Computing Machinery and Intelligence", in which he put forth the idea that a computer could be programmed so as to exhibit intelligent behaviour. He also examined, and rejected, arguments as to the impossibility of artificial intelligence. But probably the most famous contribution of the article is the Turing test, named in his honour, though he himself called it the imitation game.

Another influential paper was by Claude Shannon, on the possibility of computers playing chess. But 1956 is taken as the start of AI, since in that year a historic two-month conference on AI was arranged by John McCarthy, a young assistant professor of mathematics at Dartmouth College, New Hampshire, USA, along with his friend at M.I.T., Marvin Minsky.

The term 'artificial intelligence' was born during this conference; McCarthy seems to have invented the term, although it is not certain whether he coined it or heard it from someone else. This conference was also the first meeting of the four giants who led the AI paradigm in the USA for the next twenty years: McCarthy and Minsky at M.I.T., and Allen Newell and Herbert Simon at the Carnegie Institute of Technology (now Carnegie-Mellon University, USA).

The chronological history of AI can be categorized into three generations:

1. First Generation Pre-1950 (Classic Period):

Warren McCulloch majored in philosophy and went on to take a medical degree from Columbia University. He began his research on epilepsy, head injuries and the central nervous system. Along with Walter Pitts, an eighteen-year-old mathematician, he published in 1943 a neural net model consisting of a network of synapses and neurons which they postulated to behave in a binary fashion, either firing or not firing. They showed that their neural net model was essentially equivalent to the Turing machine.

This model, with considerable refinements, continues to serve as the cornerstone of research on neural network computers. The neural net model stimulated considerable theoretical interest and experimental investigation in attempts to model the behaviour of the brain in the laboratory.

Later experts showed that the model was fundamentally wrong in its assumption that neurons behave strictly digitally; in fact, they are highly non-linear devices which exhibit both digital and analog characteristics.
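A small sketch may make the original binary model concrete: the McCulloch-Pitts unit fires (outputs 1) exactly when the weighted sum of its binary inputs reaches a threshold. The weights and thresholds below are hand-chosen for illustration:

# A McCulloch-Pitts threshold unit: inputs and output are strictly
# binary, and the neuron "fires" iff the weighted input sum reaches
# the threshold. Weights and thresholds here are chosen by hand.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted input sum meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two logic gates built from the same unit -- the kind of construction
# McCulloch and Pitts used to argue that networks of such units could
# compute whatever a logic machine could.
for a in (0, 1):
    for b in (0, 1):
        and_out = mp_neuron((a, b), (1, 1), threshold=2)
        or_out = mp_neuron((a, b), (1, 1), threshold=1)
        print(a, 'AND', b, '=', and_out, '|', a, 'OR', b, '=', or_out)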

The third founding father of computer science who made the science of AI possible was the Hungarian-born Princeton (USA) mathematician John von Neumann. During World War II he worked on a project calculating the propagation of shock waves with the help of desk calculators, and he designed the EDVAC (Electronic Discrete Variable Automatic Computer), thus earning credit for inventing the idea of the stored program. His contributions to the development of the computer were brilliant and indisputable, though his contribution to AI is less significant. He coined the anthropomorphic term 'memory', which is still accepted.

The final first-generation patriarch of AI is Claude Shannon. His master's thesis (1937) drew the connection between the switching operations of electromagnetic relays and the Boolean algebra on which modern digital logic is based. Shannon shared Alan Turing's conviction about the possibility of machines thinking.

He published an article, "Programming a Computer for Playing Chess", in Philosophical Magazine in 1950, in which he pointed out that a typical chess game involves about 10^120 possible lines of play. Even if a newly invented computer could examine one move per microsecond, it would require 10^95 years to make its first move. In another paper, "Computers and Automata", published in 1953, Shannon raised a number of provocative questions which AI researchers have been addressing ever since.

These include:

1. Can a self-repairing machine be built which will locate and repair faults in its own components?

2. Can a digital computer be programmed to program itself?

3. Using hierarchical concepts, can a computer be programmed to learn?

Early work on AI can be found in McCorduck (1979), Machines Who Think, and in Newell and Simon (1972), Human Problem Solving.

2. Second Generation 1950-1970 (Romantic Period):

In the 1950s the first artificial intelligence laboratories were established at Carnegie-Mellon University and MIT. Early successes created a sense of optimism and false hopes that some kind of grand unified theory of mind would soon emerge and make general AI possible.

Among the most notable was Newell and Simon's program called the "General Problem Solver". The title of the program itself captures the optimism of the period. Many in the field believed that a truly useful thinking machine was only a decade away.

The promise of artificial intelligence was summed up in the classic 1968 movie 2001: A Space Odyssey (http://www.imdb.com/title/tt0062622/), featuring the artificially intelligent computer HAL 9000.

In the 1970s several failures (in machine translation and elsewhere) revealed two fundamental limits to artificial intelligence which had not been fully appreciated in the 1950s and 60s. They are most commonly known as "intractability" and "the common sense knowledge problem".

The first, intractability, puts absolute limits on the complexity of the problems which can be solved using strict logic and exhaustive searches. The second limits how successful a program can be at understanding anything without access to the enormous amount of common sense knowledge which the average person collects in an average childhood.

To combat the first problem, intractability, new paradigms were introduced which did not solve problems strictly symbolically but instead used "scruffy" methods which were less tightly controlled and more self-directed, most notably connectionism.

To combat the common sense problem, some AI researchers focused on narrowing the domain of problems to a single area of expertise. These programs were called "expert systems". At least one researcher, Douglas Lenat, is attempting to defeat the common sense problem directly by building a usable database of common sense knowledge called CYC.

Another major contributor during this period was Seymour Papert, a South African mathematician (who worked with the child psychologist Jean Piaget) and the author of LOGO, a high-level language designed especially for teaching the concept of logical thinking to children.

This period can perhaps be characterized by the following two observations:

(i) Many approaches to AI were tried and proven unsuccessful.

(ii) These second-generation researchers trained many excellent students, together with whom they were to score the many successes sketched in the next generation.

3. Third Generation 1970-Present:

AI began to have a commercial impact in the 1980s. In 1982, following the recommendations of technology foresight exercises, Japan's Ministry of International Trade and Industry initiated the Fifth Generation Computer Systems project to develop massively parallel computers which would take computing and AI to a new level. The United States responded with a DARPA-led project which involved large corporations such as Kodak and Motorola.

But despite these significant results, the grand promises failed to materialise and the public started to see AI as failing to live up to its potential. This culminated in the “AI winter” of the 1990s, when the term AI itself fell out of favour, funding decreased and the interest in the field temporarily dropped. Researchers concentrated on more focused goals, such as machine learning, robotics, and computer vision, though research in pure AI continued at reduced levels.

However, computer power has increased exponentially since the 60s and with every increase in power AI programs have been able to tackle new problems using old methods with great success. AI has contributed to the state of the art in many areas, for example speech recognition, machine translation and robotics.

The realization that computers could display intelligent behaviour if the domain in which they operated were sufficiently restricted is perhaps the most significant discovery of this period.

This concept, combined with a shift to knowledge-based reasoning, enabled third-generation AI workers to make some very impressive progress in designing intelligent machines. A few notable achievements were: Terry Winograd wrote one of the most successful programs, SHRDLU (1972), for the manipulation of blocks and objects in the very restricted domain of a table top. Bertram Raphael built one of the first robots, SHAKEY, to respond to human instructions.

Fikes and Nilsson (1971) developed STRIPS, a program to achieve goals by the use of plans and a sequence of operators. Raj Reddy (1973) created the HEARSAY system, which could 'understand' human speech with better than 90% accuracy. Daniel Bobrow (1975) wrote the program STUDENT for solving simple algebraic word problems.

David Slate and Larry Atkin (1977) wrote a number of chess-playing programs, one of which, Chess 4.5, played nearly even with one of the world's chess experts. Roger Schank and Robert Abelson (1977) introduced the concept of scripts in an attempt to model 'common sense' behaviour in routine situations.

Edward Shortliffe (1976) wrote the MYCIN expert system for diagnosing infectious bacteriological diseases. Richard O. Duda (1979) wrote PROSPECTOR, a geological analysis program which successfully discovered a molybdenum deposit.

To summarize, this period includes research on both the theoretical and the practical aspects of AI. Besides real-time problems, researchers are also engaged in theoretical research on AI, including heuristic search, uncertainty modeling, and non-monotonic and spatio-temporal reasoning.