Term Paper # 1. Introduction to Neural Network:

Work on artificial neural networks, commonly referred to as neural networks, has been motivated by the recognition that the human brain computes in an entirely different way from the conventional digital computer. The brain is a highly complex, non-linear and parallel computer (information-processing system).

It has the capability to organise its structural constituents, known as neurons, so as to perform certain computations (like pattern recognition, perception, motor control etc.) many times faster than the fastest digital computer in existence today. Consider, for example, human vision, which is an information-processing task.

It is the function of the visual system to provide a representation of the environment around us and, more important, to supply the information we need to interact with the environment. To be specific, the brain routinely accomplishes perceptual recognition tasks (for example, recognizing a familiar face embedded in an unfamiliar scene) in approximately 100-200 ms., whereas tasks of much lesser complexity may take days on a conventional computer.

For another example, consider the sonar of a bat. Sonar is an active echo-location system. In addition to providing information about how far away a target (say, a flying insect) is, a bat sonar conveys information about the relative velocity of the target, the size of the target, the size of various features of the target and the azimuth and elevation of the target.

The complex neural computations needed to extract all this information from the target echo occur within a brain the size of a plum. Indeed, an echo-locating bat can pursue and capture its target with a facility and success rate which would be the envy of a radar or sonar engineer.

How, then, does a human brain or the brain of a bat do it? At birth, a brain has great structure and the ability to build up its own rules through what we usually refer to as “experience”. Indeed, experience is built up over time, with the most dramatic development (that is, the hard-wiring) of the human brain taking place during the first two years from birth; the development continues well beyond that stage.

Term Paper # 2. Meaning of Neural Network:

A developing neuron is synonymous with a plastic brain: Plasticity permits the developing nervous system to adapt to its surrounding environment. Just as plasticity appears to be essential to the functioning of neurons as information-processing units in the human brain, so it is with neural networks made up of artificial neurons.

In its most general form, a neural network is a machine which is designed to model the way in which the brain performs a particular task or function of interest; the network is usually implemented by using electronic components or is simulated in software on a digital computer. To achieve good performance, neural networks employ a massive interconnection of simple computing cells referred to as ‘neurons’ or ‘processing units’.

We may thus offer the following definitions of a neuron and a neural network (viewed as an adaptive machine):

1. A neuron is a cell in the brain whose principal function is the collection, processing and dissemination of electrical signals.

2. A neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use.

It resembles the brain in two respects:

1. Knowledge is acquired by the network from its environment through a learning process.

2. Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.

The brain’s information-processing capacity is thought to emerge primarily from neural networks. For this reason, some of the earliest work in AI was aimed at creating artificial neural networks.

The procedure used to perform the learning process is called a learning algorithm, the function of which is to modify the synaptic weights of the network in an orderly fashion to attain a desired design objective.

The modification of synaptic weights provides the traditional method for the design of neural networks. Such an approach is closest to linear adaptive filter theory, which is already well established and successfully applied in many diverse fields. However, it is also possible for a neural network to modify its own topology, which is motivated by the fact that neurons in the human brain can die and that new synaptic connections can grow.
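
As a rough sketch of such an orderly modification of synaptic weights (the code and all names below are illustrative only, not taken from this paper; it assumes a single linear neuron trained with the least-mean-square rule, the simplest instance of the linear adaptive filtering mentioned above):

import numpy as np

def lms_update(w, x, d, eta=0.01):
    """One step of the least-mean-square (LMS) rule for a single linear neuron."""
    y = np.dot(w, x)           # actual response: weighted sum of the inputs
    e = d - y                  # error between desired and actual response
    return w + eta * e * x     # modify the synaptic weights in proportion to the error

# Illustrative use: the weights drift toward an unknown linear mapping.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.8])      # hypothetical mapping to be learned
w = np.zeros(3)                          # initial synaptic weights
for _ in range(5000):
    x = rng.normal(size=3)               # input signal
    d = np.dot(w_true, x)                # desired response supplied by a "teacher"
    w = lms_update(w, x, d)
print(np.round(w, 3))                    # close to w_true after training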

Neural networks are also referred to in the literature as neuro-computers, connectionist networks, parallel distributed processors, etc.

Term Paper # 3. Characteristics of Neural Network:

It is apparent that a neural network derives its computing power through, first, its massively parallel distributed structure and, second, its ability to learn and therefore generalise. Generalisation refers to the neural network producing reasonable outputs for inputs not encountered during training (learning).

These two information-processing capabilities make it possible for neural networks to solve complex (large-scale) problems which are currently intractable. In practice, however, neural networks cannot provide the solution by working individually. Rather, they need to be integrated into a consistent system.

Specifically, a complex problem of interest is decomposed into a number of relatively simple tasks, and neural networks are assigned a subset of the tasks which match their inherent capabilities. It is important to recognise, however, that we have a long way to go (if ever) before we can build a computer architecture which mimics a human brain.

Neural networks offer the following useful properties and capabilities. The terms and concepts used here are explained at appropriate places.

1. Non-Linearity:

An artificial neuron can be linear or non-linear. A neural network, made up of an interconnection of non-linear neurons, is itself non-linear. Moreover, the non-linearity is of a special kind in the sense that it is distributed throughout the network. Non-linearity is a highly important property, particularly if the underlying physical mechanism responsible for generation of the input signal (as in a speech signal) is inherently non-linear.

2. Input-Output Mapping:

A popular paradigm of learning called learning with a teacher or supervised learning involves modification of the synaptic weights of a neural network by applying a set of labeled training samples or task examples. Each example consists of a unique input signal and a corresponding desired response.

The network is presented with an example picked at random from the set, and the synaptic weights (free parameters) of the network are modified to minimise the difference between the desired response and the actual response of the network produced by the input signal, in accordance with an appropriate statistical criterion.

The training of the network is repeated for many examples in the set until the network reaches a steady state, where there are no further significant changes in the synaptic weights. The previously applied training examples may be reapplied during the training session, but in a different order.

Thus, the network learns from the examples by constructing an input-output mapping for the problem at hand. Such an approach brings to mind the study of non-parametric statistical inference, which is a branch of statistics dealing with model-free estimation, or, from a biological viewpoint, tabula rasa learning.

The term “non-parametric” is used here to signify the fact that no prior assumptions are made on a statistical model for the input data. Consider, for example, a pattern-classification task, where the requirement is to assign an input signal representing a physical object or event to one of several prespecified categories (classes).

In a non-parametric approach to this problem, the requirement is to ‘estimate’ arbitrary decision boundaries in the input signal space for the pattern-classification task using a set of examples, and to do so without invoking a probabilistic distribution model. A similar point of view is implicit in the supervised learning paradigm, which suggests a close analogy between the input-output mapping performed by a neural network and non-parametric statistical inference.
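
As a hedged illustration of such decision-boundary estimation (the data, the threshold neuron and every name below are invented for the example), a single threshold unit is trained on labelled two-dimensional points, and the learned weights then define the boundary directly, with no probabilistic model of the inputs:

import numpy as np

def train_threshold_neuron(inputs, labels, epochs=50, eta=0.1):
    """Estimate a linear decision boundary from labelled examples alone."""
    # A fixed +1 component is appended so the bias is learned as an ordinary weight.
    X = np.hstack([inputs, np.ones((len(inputs), 1))])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, label in zip(X, labels):           # labels are +1 or -1
            y = 1 if np.dot(w, x) >= 0 else -1    # threshold activation
            if y != label:                        # adjust weights only on a misclassification
                w += eta * label * x
    return w

# Two invented classes of two-dimensional points.
rng = np.random.default_rng(1)
class_a = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(20, 2))
class_b = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(20, 2))
inputs = np.vstack([class_a, class_b])
labels = np.array([1] * 20 + [-1] * 20)

w = train_threshold_neuron(inputs, labels)
print(np.round(w, 3))    # w[0]*x1 + w[1]*x2 + w[2] = 0 is the estimated decision boundary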

3. Adaptivity:

Neural networks have a built-in capability to adapt their synaptic weights to changes in the surrounding environment. In particular, a neural network trained to operate in a specific environment can easily be retrained to deal with minor changes in the operating environmental conditions. Moreover, when it is operating in a non-stationary environment (that is, one whose statistics change with time), a neural network can be designed to change its synaptic weights in real time.

The natural architecture of a neural network for pattern classification, signal processing, and control applications, coupled with the adaptive capability of the network, makes it a useful tool in adaptive pattern classification, adaptive signal processing, and adaptive control. As a general rule, we may say that the more adaptive we make a system, all the time ensuring that the system remains stable, the more robust its performance will likely be when the system is required to operate in a non-stationary environment.

It should be emphasised, however, that adaptivity does not always lead to robustness; indeed, it may do the very opposite. For example, an adaptive system with short time constants may change rapidly and therefore tend to respond to spurious disturbances, causing a drastic degradation in system performance.

To realise the full benefits of adaptivity, the principal time constants of the system should be long enough for the system to ignore spurious disturbances and yet short enough to respond to meaningful changes in the environment; this problem is sometimes referred to as the stability-plasticity dilemma.
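
A small sketch of this dilemma (a single adaptive weight and invented data, used purely for illustration): a short time constant, i.e. a large learning rate, follows a genuine change in the environment quickly but also follows the noise, while a long time constant ignores the noise but lags behind the change.

import numpy as np

def adapt(observations, eta):
    """Adapt a single weight toward a stream of observations; eta sets the time constant."""
    w, trace = 0.0, []
    for d in observations:
        w += eta * (d - w)       # small eta = long time constant; large eta = short time constant
        trace.append(w)
    return np.array(trace)

rng = np.random.default_rng(2)
n = 400
true_value = np.where(np.arange(n) < 200, 1.0, 3.0)   # the environment changes half-way through
observed = true_value + 0.5 * rng.normal(size=n)      # noisy observations of it

fast = adapt(observed, eta=0.5)    # short time constant: tracks the change quickly but also the noise
slow = adapt(observed, eta=0.02)   # long time constant: smooth but slow to follow the change
print(round(fast[200:250].mean(), 2), round(slow[200:250].mean(), 2))
# the fast weight has already moved to about 3 (with jitter); the slow weight is still in transit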

4. Evidential Response:

In the context of pattern classification, a neural network can be designed to provide information not only about which particular pattern to select, but also about the confidence in the decision made. This latter information may be used to reject ambiguous patterns, should they arise, and thereby improve the classification performance of the network.
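
A minimal sketch of such an evidential response (the softmax conversion and the threshold are assumptions made for illustration, not prescribed by the text): the network's output scores are turned into confidences, and a pattern whose best confidence falls below a threshold is rejected rather than classified.

import numpy as np

def classify_with_rejection(scores, threshold=0.7):
    """Return the winning class and its confidence, or reject an ambiguous pattern."""
    p = np.exp(scores - scores.max())
    p = p / p.sum()                      # softmax: raw output scores become confidences
    winner = int(np.argmax(p))
    if p[winner] < threshold:            # not confident enough: reject rather than guess
        return None, float(p[winner])
    return winner, float(p[winner])

print(classify_with_rejection(np.array([4.0, 0.5, 0.2])))   # clear winner: accepted
print(classify_with_rejection(np.array([1.1, 1.0, 0.9])))   # ambiguous: rejected (returns None)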

5. Contextual Information:

Knowledge is represented by the very structure and activation state of a neural network. Every neuron in the network is potentially affected by the global activity of all other neurons in the network. Consequently, contextual information is dealt with naturally by a neural network.

6. Fault Tolerance:

A neural network, implemented in hardware form, has the potential to be inherently fault tolerant, or capable of robust computation, in the sense that its performance degrades gracefully under adverse operating conditions. For example, if a neuron or its connecting links are damaged, recall of a stored pattern is impaired in quality.

However, due to the distributed nature of information stored in the network, the damage has to be extensive before the overall response of the network is degraded seriously. Thus, in principle, a neural network exhibits a graceful degradation in performance rather than catastrophic failure.

There is some empirical evidence for robust computation, but usually it is uncontrolled. In order to be assured that the neural network is in fact fault tolerant, it may be necessary to take corrective measures in designing the algorithm used to train the network.
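
A rough sketch of this graceful degradation (the storage scheme, dimensions and numbers are invented for illustration; a simple Hebbian outer-product memory stands in for "a stored pattern"): because the pattern is distributed over many weights, deleting even a large fraction of them barely affects recall.

import numpy as np

rng = np.random.default_rng(3)
pattern = rng.choice([-1.0, 1.0], size=200)        # a bipolar pattern to be stored

# Hebbian (outer-product) storage: the pattern is spread over all the synaptic weights.
W = np.outer(pattern, pattern) / pattern.size
np.fill_diagonal(W, 0.0)

def recall(weights, cue):
    """One synchronous update recovering the stored pattern from a noisy cue."""
    return np.where(weights @ cue >= 0, 1.0, -1.0)

cue = pattern.copy()
cue[:40] *= -1                                     # corrupt 20% of the recall cue

for fraction in (0.0, 0.5, 0.9):
    damaged = W.copy()
    damaged[rng.random(W.shape) < fraction] = 0.0  # delete a fraction of the weights
    accuracy = np.mean(recall(damaged, cue) == pattern)
    print(f"weights removed: {fraction:.0%}  recall accuracy: {accuracy:.2f}")
# Because the information is distributed, recall stays near perfect until the damage is extensive.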

7. VLSI Implementability:

The massively parallel nature of a neural network makes it potentially fast for the computation of certain tasks. This same feature makes a neural network well suited for implementation using very-large-scale-integrated (VLSI) technology. One particularly beneficial virtue of VLSI is that it provides a means of capturing truly complex behaviour in a highly hierarchical fashion.

8. Uniformity of Analysis and Design:

Basically, neural networks enjoy universality as information processors. We say this in the sense that the same notation is used in all domains involving the application of neural networks.

This feature manifests itself in different ways:

I. Neurons, in one form or another, represent an ingredient common to all neural networks.

II. This commonality makes it possible to share theories and learning algorithms in different applications of neural networks.

III. Modular networks can be built through a seamless integration of modules. 

9. Neurobiological Analogy:

The design of a neural network is motivated by analogy with the brain, which is living proof that fault-tolerant parallel processing is not only physically possible but also fast and powerful. Neurobiologists look to (artificial) neural networks as a research tool for the interpretation of neurobiological phenomena. On the other hand, engineers look to neurobiology for new ideas to solve problems more complex than those based on conventional hard-wired design techniques.

The neurobiological analogy, exemplified by neuromorphic integrated circuits, is useful in another important way. It provides a hope and belief, and to a certain extent an existence proof, that a physical understanding of neurobiological structures could have a productive influence on the art of electronics and VLSI technology.

With inspiration from neurobiology in mind, it seems appropriate that we take a brief look at the human brain and its structural levels of organisation.

Term Paper # 4. Neural Networks Viewed As Directed Graphs:

The block diagram of Fig. 11.4, or that of Fig. 11.6, provides a functional description of the various elements which constitute the model of an artificial neuron. We may simplify the appearance of the model by using the idea of signal-flow graphs without sacrificing any of the functional details of the model. Signal-flow graphs, with a well-defined set of rules, were originally developed by Mason (1953, 1956) for linear networks.

The presence of nonlinearity in the model of a neuron limits the scope of their application to neural networks. Nevertheless, signal-flow graphs do provide a neat method for the portrayal of the flow of signals in a neural network.

A signal-flow graph is a network of directed links (branches) which are interconnected at certain points called nodes. A typical node j has an associated node signal xj. A typical directed link originates at node j and terminates on node k; it has an associated transfer function, or transmittance, which specifies the manner in which the signal yk at node k depends on the signal xj at node j. The flow of signals in the various parts of the graph is dictated by three basic rules.

Rule 1:

A signal flows along a link only in the direction defined by the arrow on the link.

Two different types of links may be distinguished:

I. Synaptic links, whose behaviour is governed by a linear input-output relation. Specifically, the node signal xj is multiplied by the synaptic weight wkj to produce the node signal yk, as illustrated in Fig. 11.9a.

II. Activation links, whose behaviour is governed in general by a non-linear input-output relation. This form of relationship is illustrated in Fig. 11.9b, where ϕ(.) is the non-linear activation function.

Rule 2:

A node signal equals the algebraic sum of all signals entering the pertinent node via the incoming links.

This second rule is illustrated in Fig. 11.9c., for the case of synaptic convergence or fan-in.

Rule 3:

The signal at a node is transmitted to each outgoing link originating from that node, with the transmission being entirely independent of the transfer functions of the outgoing links.

This third rule is illustrated in Fig. 11.9 d, for the case of synaptic divergence or fan-out.

For example, using these rules we may construct the signal-flow graph of Fig. 11.10 as the model of a neuron, corresponding to the block diagram of Fig. 11.6. The representation shown in Fig. 11.10 is clearly simpler in appearance than that of Fig. 11.6, yet it contains all the functional details depicted in the latter diagram. In both figures, the input x0 = +1 and the associated synaptic weight wk0 = bk, where bk is the bias applied to neuron k.

Indeed, based on the signal-flow graph of Fig. 11.10 as the model of a neuron, we may now offer the following mathematical definition of a neural network.

A neural network is a directed graph consisting of nodes with interconnecting synaptic and activation links, and is characterised by four properties:

1. Each neuron is represented by a set of linear synaptic links, an externally applied bias, and a possibly non-linear activation link. The bias is represented by a synaptic link connected to an input fixed at +1.

2. The synaptic links of a neuron weigh their respective input signals.

3. The weighted sum of the input signals defines the induced local field of the neuron in question.

4. The activation link squashes the induced local field of the neuron to produce an output.

The state of the neuron may be defined in terms of its induced local field or its output signal.
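
A minimal sketch of these four properties in Python (the logistic activation and all names are assumptions chosen for illustration, not prescribed by the text): the bias bk enters as the weight wk0 on an input fixed at +1, the weighted sum of the inputs gives the induced local field, and the activation link squashes it into the output.

import numpy as np

def neuron(x, w, b):
    """A single neuron expressed in terms of the four properties above."""
    x_full = np.concatenate(([1.0], x))    # input fixed at +1 carries the bias
    w_full = np.concatenate(([b], w))      # so that wk0 = bk
    v = np.dot(w_full, x_full)             # induced local field: weighted sum of the inputs
    y = 1.0 / (1.0 + np.exp(-v))           # activation link squashes v into the output
    return v, y

v, y = neuron(x=np.array([0.5, -1.2, 0.3]),
              w=np.array([2.0, 0.4, -1.5]),
              b=0.1)
print(round(v, 3), round(y, 3))            # the neuron's state: local field v or output y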

A directed graph so defined is complete in the sense that it describes not only the signal flow from neuron to neuron, but also the signal flow inside each neuron. When, however, the focus of attention is restricted to signal flow from neuron to neuron, we may use a reduced form of this graph by omitting the details of signal flow inside the individual neurons. Such a directed graph is said to be partially complete.

It is characterised by:

1. Source nodes which supply input signals to the graph.

2. Each neuron is represented by a single node called a computation node.

3. The communication links interconnecting the source and computation nodes of the graph carry no weight; they merely provide directions of signal flow in the graph.

A partially complete directed graph defined in this way is referred to as an architectural graph, describing the layout of the neural network. It is illustrated in Fig. 11.11 for the simple case of a single neuron with m source nodes and a single node fixed at +1 for the bias.

To sum up, we have three graphical representations of a neural network:

I. Block diagram, providing a functional description of the network (Fig. 11.4).

II. Signal-flow graph, providing a complete description of signal flow in the network (Fig. 11.10).

III. Architectural graph, describing the network layout (Fig. 11.11).

Term Paper # 5. Feedback of Neural Networks:

Feedback is said to exist in a dynamic system whenever the output of an element in the system influences in part the input applied to that particular element, thereby giving rise to one or more closed paths for the transmission of signals around the system. Indeed, feedback occurs in almost every part of the nervous system of every animal. Moreover, it plays a major role in the study of a special class of neural networks known as recurrent networks.

Fig. 11.12 shows the signal-flow graph of a single-loop feedback system, where the input signal xj(n), internal signal x'j(n) and output signal yk(n) are functions of the discrete-time variable n. The system is assumed to be linear, consisting of a forward path and a feedback path which are characterised by the ‘operators’ A and B, respectively. In particular, the output of the forward channel determines, in part, its own output through the feedback channel.

From Fig. 11.12, we readily note the following input-output relationships:

yk(n) = A[x'j(n)] … (11.16)

x'j(n) = xj(n) + B[yk(n)] … (11.17)

where the square brackets are included to emphasise that A and B act as operators. Eliminating x'j(n) between Eqs. (11.16) and (11.17), we get

yk(n) = [A/(1 – AB)] [xj(n)] … (11.18)

We refer to A/(1 – AB) as the closed-loop operator of the system, and to AB as the open-loop operator. In general, the open-loop operator is non-commutative in that BA ≠ AB. Consider, for example, the single-loop feedback system shown in Fig. 11.12, for which A is a fixed weight w and B is a unit-delay operator z^-1, whose output is delayed with respect to the input by one time unit.

We may then express the closed-loop operator of the system as:

A/(1 – AB) = w/(1 – wz^-1) = w(1 – wz^-1)^-1

Using the binomial expansion for (1 – wz^-1)^-1, we may rewrite the closed-loop operator of the system as

A/(1 – AB) = w Σ w^l z^-l (summed over l = 0, 1, 2, …)

Hence, substituting this expansion in Eq. (11.18), we get

yk(n) = w Σ w^l z^-l [xj(n)]

where we have included square brackets to emphasise the fact that z^-1 is an operator. In particular, from the definition of z^-1 we have

z^-l [xj(n)] = xj(n – l)

where xj(n – l) is a sample of the input signal delayed by l time units. Accordingly, we may express the output signal yk(n) as an infinite weighted summation of present and past samples of the input signal xj(n), as shown by (Fig. 11.13)

yk(n) = Σ w^(l+1) xj(n – l) (summed over l = 0, 1, 2, …)

We now see clearly that the dynamic behavior of the system is controlled by the weight w.

In particular, we may distinguish two specific cases:

1. | w | < 1, for which the output signal yk(n) is exponentially convergent, that is, the system is stable. This is illustrated in Fig. 11.14a for a positive w.

2. | w | ≥ 1, for which the output signal yk(n) is divergent, that is, the system is unstable. If | w | = 1 the divergence is linear, as in Fig. 11.14b, and if | w | > 1 the divergence is exponential, as in Fig. 11.14c.

Stability features prominently in the study of feedback systems.

The case of | w | < 1 corresponds to a system with infinite memory in the sense that the output of the system depends on samples of the input extending into the infinite past. Moreover, the memory is fading in that the influence of a past sample is reduced exponentially with time n.
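
The following minimal sketch (assuming, for illustration, a constant input xj(n) = 1 applied from n = 0) simulates the single-loop system of Fig. 11.12 with A = w and B = z^-1 and reproduces the three behaviours just described:

def single_loop_response(w, steps, x=1.0):
    """Output of the Fig. 11.12 system, yk(n) = w*(xj(n) + yk(n-1)), for a constant input xj(n) = x."""
    y_prev, outputs = 0.0, []
    for _ in range(steps):
        y = w * (x + y_prev)     # forward weight w acting on the input plus the fed-back output
        outputs.append(round(y, 3))
        y_prev = y               # unit-delay operator z^-1: this output re-enters at the next step
    return outputs

for w in (0.5, 1.0, 1.5):
    print(f"w = {w}:", single_loop_response(w, steps=8))
# w = 0.5: converges exponentially toward w/(1 - w) = 1.0   (stable, fading memory)
# w = 1.0: grows linearly without bound                     (linear divergence)
# w = 1.5: grows exponentially without bound                (exponential divergence)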

The analysis of the dynamic behaviour of neural networks involving the application of feedback is unfortunately complicated by virtue of the fact that the processing units used for the construction of the network are usually non-linear.