Here is a term paper on ‘Artificial Intelligence’. Find paragraphs, long and short term papers on ‘Artificial Intelligence’ especially written for school and college students.

Term Paper # 1. Origin of the Idea of Artificial Intelligence – Turing Test:

The idea of Artificial Intelligence originated from the historic experiment called the Turing Test. This test provides a relatively objective and unambiguous restatement of the question, "Can machines think?", in operational language. The British mathematician Alan Turing is one of the founders of computer science and the father of artificial intelligence.

He designed the Automatic Computing Engine (ACE), named in honour of his countryman of a century earlier, Charles Babbage, who had proposed the Analytical Engine. More than 50 years ago he predicted the advent of "thinking machines". In his time computers were slow. Turing died in 1954 but left a benchmark test for an intelligent computer: it must fool a person into thinking it (the computer) is human. The test he proposed, now known as the Turing test, is performed in two phases.

In the first phase, the interrogator isolates himself from the man and the woman. Questions are put to both the man and the woman through a neutral medium, say a teletypewriter, and each party is isolated in a separate room to eliminate visual or audible clues. The questions asked include calculations such as the multiplication of large numbers, as well as questions on lyrics and English literature.

In the second phase, the man is replaced by a computer without the knowledge of the interrogator. The interrogator cannot tell whether he is conversing with the man, the woman or the machine; he knows them only as A and B.

Paraphrased in terms of intelligence, the Turing test may be stated thus: if conversation with a computer is indistinguishable from conversation with a human, the computer is displaying "intelligence". In other words, if we cannot tell the difference between a person (natural intelligence) and a machine (artificial intelligence), they must be the same.

If the interrogator could not distinguish between a man imitating a woman and a computer imitating a man, the computer succeeded in passing the test. In other words, the goal of the machine was to fool the interrogator into believing that it is a person. If the machine succeeds at this, then it is concluded that the machine can think.

To make the test fair, the interrogator is allowed to ask any question which comes to mind in order to stump the computer and find some query which would distinguish one from the other. The computer is also allowed to do anything it can to mislead the interrogator (including slowing down its responses to questions which it can process more quickly than a human can) and has access to any knowledge base available.

Turing argued that if the interrogator could not distinguish them by questioning, then it would be unreasonable not to call the computer intelligent; if, on the other hand, the interrogator could distinguish the man from the computer, the machine fails the test.

Turing’s own statement of beliefs on the outcome of playing the imitation game is quite perceptive and goes in part:

“I believe that in about 50 years’ time, it will be possible to program computers, with a storage capacity of 10^9, to make them play the game so well that an average interrogator will not have more than a 70% chance of making the right identification after five minutes of questioning” (whether 10^9 refers to bits, bytes or words is not clear).

………………………….

“Nevertheless, I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs”.

To date, no computer program, even one based on an optical or biological computer, has ever succeeded in doing so. If and when it happens, it will open a Pandora’s box of ethical and philosophical questions. After all, if a computer is perceived to be as intelligent as a person, what is the difference between a smart computer and a human being? Today’s chatbot, a computer program which has a persona and a name and chats with you, is incapable of dealing with changes in context or abstract ideas and succeeds only at momentarily tricking people by regurgitating pre-programmed answers.

Recently, Hal, a chain of algorithms which is being raised as a child and taught to speak through experiential learning in the same way as human children, has been under development in Israel (Indian Express, August 2001) by a neurolinguist, Dr. Anat Treister-Goren.

Hal has fooled child-language experts into thinking that he is a toddler, with an understanding of about 200 words and a 50-word vocabulary which he uses in short, infantile sentences. Dr. Goren talks to Hal and reads him stories in much the same way a mother teaches her young child about colours, food and animals.

The Israeli hi-tech computer company aims over the next 10 years to develop Hal into an “adult” computer program which can do what no computer has ever done before: pass the Turing test. If this becomes true, the distinction between the old-fashioned, flesh-and-blood kind of intelligence and the new kind will start to blur. But at present AI computers are far less “intelligent” than human beings.

Term Paper # 2. Languages of Artificial Intelligence:

In the course of their work on the Logic Theorist and GPS, Newell, Simon and Shaw developed their Information Processing Language, or IPL, a computer language tailored for AI programming. At the heart of IPL was a highly flexible data-structure they called a “list”. A list is simply an ordered sequence of items of data. Some or all of the items in a list may themselves be lists. This leads to richly branching structures.
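
As a minimal illustration of this idea (a sketch in Python rather than IPL or LISP, with invented data), a list whose items may themselves be lists naturally forms a branching, tree-like structure:

```python
# A hypothetical nested list: some items are themselves lists,
# which yields a richly branching (tree-like) structure.
knowledge = ["animal", ["bird", ["sparrow", "penguin"]], ["fish", ["shark"]]]

def depth(item):
    """Return how deeply nested an item is (0 for a plain atom)."""
    if not isinstance(item, list):
        return 0
    return 1 + max((depth(sub) for sub in item), default=0)

print(depth(knowledge))  # -> 3: the structure branches three levels deep
```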

In 1960 John McCarthy combined elements of IPL with elements of the lambda calculus, a powerful logical apparatus dating from 1936, to produce the language that he called LISP (from LISt Processor). In the U.S., LISP remains the principal language for AI work.

(The lambda calculus itself was invented by the Princeton logician Alonzo Church while investigating the abstract Entscheidungsproblem, or decision problem, for predicate logic, the same problem that Turing was attacking when he invented the universal Turing machine.)

The logic programming language PROLOG (from PROgrammation en LOGique) was conceived by Alain Colmerauer at the University of Marseilles, France where the language was first implemented in 1973. PROLOG was further developed by logician Robert Kowalski, a member of the AI group at Edinburgh University.

This language makes use of a powerful theorem-proving technique known as “resolution”, invented in 1963 at the Atomic Energy Commission’s Argonne National Laboratory in Illinois, USA, by the British logician Alan Robinson. PROLOG can determine whether or not a given statement follows logically from other given statements.

For example, given the statements “All logicians are rational” and “Robinson is a logician”, a PROLOG program responds in the affirmative to the query “Is Robinson rational?” PROLOG is widely used for AI work, especially in Europe and Japan.
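
The flavour of this inference can be sketched in a few lines of Python (this is only an illustrative stand-in for PROLOG's resolution mechanism, using the names from the example above):

```python
# Fact base and a single rule, mirroring the logician example in plain Python.
facts = {("logician", "Robinson")}

def is_rational(x):
    """Rule: X is rational if X is a logician ("All logicians are rational")."""
    return ("logician", x) in facts

print(is_rational("Robinson"))  # -> True, i.e. the query "Is Robinson rational?" succeeds
```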

Researchers at the Institute for New Generation Computer Technology in Tokyo have used PROLOG as the basis for sophisticated logic programming languages. These languages are in use on non-numerical parallel computers developed at the Institute. (The languages and the computers are known as “Fifth Generation” software and hardware.)

Once the knowledge base has been built, AI techniques based on search and pattern matching are used to solve the problem at hand.

Term Paper # 3. Practical Impact of Artificial Intelligence:

This aspect can alternatively be looked upon as a set of examples of real-world systems based on AI.

They are quite numerous; a few are summarized below:

1. Intelligent Distribution Agent (IDA), developed for the U.S. Navy, helps assign sailors new jobs at the end of their tours of duty by negotiating with them via email.

2. Systems which trade stocks and commodities without human intervention.

3. Banking software for approving bank loans and detecting credit card fraud (developed by Fair Isaac Corp).

4. Search engines such as Brain Boost (http://www.brainboost.com/), or even Google.

5. Intelligent agents capable of providing context-sensitive help to users of software systems.

These systems are able to infer the correct level of help to provide because they can:

(a) Make inferences about the level of skill of the user and

(b) Utilize deep knowledge about the software application itself.

Using these areas of knowledge, it is possible to identify the types of mistakes which users of varying skill levels are likely to make. Novice users, who have no conceptual insight into an application, tend to make syntactic and semantic mistakes; naive users tend to make more semantic mistakes; whereas expert users tend to make thematic mistakes, i.e., inferring incorrectly that one way of assembling commands to solve a particular problem can be generalized to solve another problem using a comparable sequence of commands.

6. Intelligent help to operators of complex and potentially dangerous industrial processes such as nuclear power plants. Human operators of high-risk industrial processes have a limited attention span and typically perform poorly in situations where cascades of sequential problem sets can result in an inappropriate remedy.

7. “Common sense” reasoning. An ongoing example is the project called CYC. CYC attempts to capture and use knowledge about the world to perform reasoning about specific topics. CYC drives its inferencing capability by using an encyclopedic amount of knowledge about the world.

Its current knowledge base consists of 300,000 concepts, 3,000,000 assertions and 26,000 relations (as of July 2008). CYC can further be trained by interaction with humans in the outside world. CYC’s ability to reason can be illustrated by taking, for instance, a picture of a group of people.

This group of people can be occupationally characterized by their attire. Among the people is an athlete who very evidently has just run a foot race for an extended period of time. CYC can be queried as to which one is wet. CYC can correctly infer that people who physically exert themselves perspire. From that CYC can infer that people who perspire will momentarily be wet. CYC can therefore conclude that it is the athlete who is wet.
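
This chain of inference can be sketched as a toy forward-chaining loop in Python (purely illustrative; CYC's actual machinery is far richer):

```python
# Start from one observed fact and apply the two common-sense rules repeatedly.
facts = {("physically_exerted", "athlete")}
rules = [("physically_exerted", "perspires"),  # people who exert themselves perspire
         ("perspires", "wet")]                 # people who perspire are momentarily wet

changed = True
while changed:                                 # keep going until no new facts are derived
    changed = False
    for pre, post in rules:
        for prop, who in list(facts):
            if prop == pre and (post, who) not in facts:
                facts.add((post, who))
                changed = True

print(("wet", "athlete") in facts)             # -> True: the athlete is the one who is wet
```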

Term Paper # 4. Architecture of Artificial Intelligence Machines (Intelligent Systems):

An Intelligent system has four major components. The quality of the result depends on how much knowledge the system possesses. The available knowledge must be represented in a very efficient way. Hence, knowledge representation is a vital component of the system.

It is not merely enough that knowledge is represented efficiently; the inference process should also be equally good for satisfactory results. Knowledge serves two important functions: to define what can be done to solve a problem and to specify what it means to have solved the problem.

That is called essential knowledge. Another type of knowledge advises on how best to go about solving a problem efficiently. Such knowledge is called meta-knowledge. The inference process, or control strategy, is the second component. It is broadly divided into brute-force and heuristic search procedures.
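
The difference between the two control strategies can be illustrated with a small sketch (an invented search space in Python): brute-force search expands states blindly in the order they are discovered, while heuristic search orders them by an estimate of how close they are to the goal:

```python
from collections import deque
import heapq

# A tiny hypothetical state space: each state lists its neighbouring states.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["G"], "E": ["G"], "G": []}
heuristic = {"A": 3, "B": 2, "C": 2, "D": 1, "E": 1, "G": 0}  # estimated distance to goal

def brute_force(start, goal):
    """Breadth-first search: expands states in the order they were found."""
    frontier, seen = deque([start]), {start}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            return True
        for nxt in graph[state]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def heuristic_search(start, goal):
    """Greedy best-first search: always expands the state that looks closest to the goal."""
    frontier, seen = [(heuristic[start], start)], {start}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            return True
        for nxt in graph[state]:
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic[nxt], nxt))
    return False

print(brute_force("A", "G"), heuristic_search("A", "G"))  # -> True True
```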

Just as we have specialized languages and programs for data-processing and scientific applications, we also need specialized languages and tools for AI programming. AI languages provide the basic functions for AI programming, and tools provide the right environment; these form the third component.

These days, the trend in AI systems is towards computing along several dimensions, clearly pointing to the ability of a machine to handle operations which earlier used to be handled by experts in their domains.

This has become possible due to advancements both in software and in computer architecture. Table 1.2 shows the growth of hardware and software, listed in order of sophistication. So AI computers (hardware and software) are the fourth and the most important component of AI systems.

Despite impressive achievements on the hardware and software fronts, it has not been possible to produce co-ordinated autonomous systems which possess some of the basic abilities of a three-year-old child.

However, the idea that computers can ‘think’ is very intriguing. What would the world be like if we had intelligent machines? At present, human intelligence, ‘brain power’, is still the envy of intelligent computers.

Term Paper # 5. Future of Artificial Intelligence:

Cyc is a 22-year-old project based on symbolic reasoning, with the aim of amassing general knowledge and acquiring common sense. The volume of knowledge it has accumulated makes it able to learn new things by itself. Cyc will converse with Internet users and acquire new knowledge from them.

These projects are unlikely to lead directly to the creation of AI, but they can be helpful when teaching an artificial intelligence about the English language and the human-world domain.

In the next 10 years technologies in narrow fields such as speech recognition will continue to improve and will reach human levels. In 10 years AI will be able to communicate with humans in unstructured English using text or voice, navigate (not perfectly) in an unprepared environment and will have some rudimentary common sense (and domain-specific intelligence).

Some parts of the human (or animal) brain will be recreated in silicon. The feasibility of this is demonstrated by tentative hippocampus experiments in rats.

There will be an increasing number of practical applications based on digitally recreated aspects of human intelligence, such as cognition, perception, rehearsal learning, or learning by repetitive practice.

The development of meaningful artificial intelligence will require that machines acquire some variant of human consciousness. Systems which do not possess self-awareness and are not capable of feeling will at best always be very brittle.

Without these uniquely human characteristics, truly useful and powerful assistants will remain a goal to achieve. To be sure, advances in hardware, storage, parallel processing architectures will enable ever greater leaps in functionality.

But these systems will remain mechanistic, soulless corpses. Going forward will require systems which are able to demonstrate conclusively that they exhibit self-awareness, language skills, and surface, shallow and deep knowledge about the world around them and their role within it.

Even today, automatic speech recognition software is available with a vocabulary of 10,000 words in 10 languages, efficient at continuous voice and speech recognition. Software has even been developed which can perform language translation and emotion detection with human-level accuracy while requiring zero user training. However, emotion emulation has not yet been achieved.

However, the field of artificial consciousness remains in its infancy. The early years of the 21st century should nevertheless see dramatic strides forward in this area.

During the 2010s, new services can be foreseen to arise which will utilize large and very large arrays of processors. They will be architected to form parallel-processing ensembles and will allow for reconfigurable topologies such as nearest-neighbour meshes, rings or trees. They will be available via an Internet or WiFi connection (a user will have access to systems whose power will rival that available to governments in the 1980s or 1990s). Because of the nature of the nearest-neighbour topology, higher-dimension hypercubes (e.g., D10 to D20) can be assembled on an ad-hoc basis as necessary.

A D10 ensemble, i.e., 1,024 processors, is well within the grasp of today’s technology. A D20, i.e., 1,048,576 processors, is well within the reach of an ISP or a processor provider. Enterprising concerns will make these systems available using business models comparable to contracting with an ISP for web space for a web site. Application-specific ensembles will gain early popularity because they will offer well defined and understood application software which can be recursively configured onto larger and larger ensembles.
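
The arithmetic behind these figures is straightforward: a hypercube of dimension D connects 2^D processors, each wired to D nearest neighbours (the nodes whose addresses differ in one bit). A short Python sketch (hypothetical helper functions) makes the D10 and D20 sizes explicit:

```python
def hypercube_size(dimension):
    """Number of processors in a hypercube of the given dimension: 2 ** D."""
    return 2 ** dimension

def neighbours(node, dimension):
    """Nearest neighbours of a node: flip each of its D address bits in turn."""
    return [node ^ (1 << bit) for bit in range(dimension)]

print(hypercube_size(10))   # -> 1024 processors in a D10 ensemble
print(hypercube_size(20))   # -> 1048576 processors in a D20 ensemble
print(neighbours(0, 3))     # -> [1, 2, 4]: the 3 neighbours of node 0 in a D3 cube
```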

These larger ensembles will allow for increasingly fine-grained computational modeling of real-world problem domains. Over time, market awareness and sophistication will grow. With this growth will come an increasing need for more dedicated and specific types of computing ensembles.

The ongoing success of applied Artificial Intelligence and of cognitive simulation seems assured. However, strong AI, which aims to duplicate human intellectual abilities, remains controversial. The reputation of this area of research has been damaged over the years by exaggerated claims of success which have appeared both in the popular media and in the professional journals. At the present time, even an embodied system displaying the overall intelligence of a cockroach is proving elusive, let alone a system rivaling a human being.

The difficulty of “scaling up” AI’s so far relatively modest achievements cannot be overstated. Five decades of research in symbolic AI have failed to produce any firm evidence that a symbol system can manifest human levels of general intelligence. Critics of nouvelle AI regard as mystical the view that high-level behaviours involving language understanding, planning and reasoning will somehow “emerge” from the interaction of basic behaviours like obstacle avoidance, gaze control and object manipulation.

Connectionists have been unable to construct working models of the nervous systems of even the simplest living things. Caenorhabditis elegans, a much-studied worm, has approximately 300 neurons, whose pattern of interconnections is perfectly known. Yet connectionist models have failed to mimic the worm’s simple nervous system. The “neurons” of connectionist theory are gross oversimplifications of the real thing.

However, this lack of substantial progress may simply be testimony to the difficulty of strong AI but not to its impossibility.

The dream of building a computer which closely duplicates the human brain will probably not be realized in the foreseeable future, because it is difficult to represent moods, imagination, emotions, creativity and the like. However, studies are going on in philosophy, psychology, linguistics, neuroscience and computer science to achieve this end.

Thus, artificial intelligence is an interdisciplinary field, and this small introductory survey concludes that even though we cannot yet simulate the human brain, research has already begun in this direction through artificial neural networks.