Introduction to Neural Networks

Artificial neural networks are a method of computation and information processing inspired by the nervous system. By mimicking the behavior of biological neurons, an artificial neural network can learn from a given set of data and make predictions from it. Neural networks can be more robust for data analysis than conventional statistical methods because they tolerate small variations in their parameters and noise in their inputs.

 

The basic element of a neural network is the perceptron. First proposed by Frank Rosenblatt of Cornell University in 1958, the perceptron has five basic elements: an n-vector input, weights, a summing function, a threshold device, and an output. The output takes the value -1 or +1, governed by the threshold setting: if the weighted sum of the inputs falls below the threshold, the output is -1; if it exceeds the threshold, the output is +1.
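
As a minimal sketch, this threshold rule can be written in a few lines of Python (the input values, weights, and threshold below are arbitrary illustrations, not values from Rosenblatt's work):

    def perceptron_output(inputs, weights, threshold):
        # Weighted sum of the input vector, compared against the threshold.
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total > threshold else -1

    # The weighted sum here is -0.35, below the threshold of 0.0, so the
    # perceptron outputs -1.
    print(perceptron_output([0.5, -1.0, 0.25], [0.4, 0.6, 0.2], threshold=0.0))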

 

A more technical look at a single-neuron perceptron shows that it accepts an input vector X of dimension N. These inputs pass through a weight vector W, also of dimension N. The summation node computes a value a, the dot product of X and W plus a bias. The value a is then passed through an activation function that compares it to a predefined threshold: if a is below the threshold, the perceptron does not fire; if a is above it, the perceptron fires one pulse of predefined amplitude.
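
A sketch of that computation, assuming NumPy and a pulse amplitude of 1 (all numbers are invented for illustration):

    import numpy as np

    def single_neuron(x, w, bias, threshold, amplitude=1.0):
        a = np.dot(x, w) + bias                       # summation node: a = X . W + bias
        return amplitude if a > threshold else 0.0    # fire one pulse, or stay silent

    x = np.array([0.2, 0.7, -0.4])    # input vector X, N = 3
    w = np.array([0.5, -0.1, 0.3])    # weight vector W
    print(single_neuron(x, w, bias=0.1, threshold=0.0))    # prints 1.0: a = 0.01 exceeds the threshold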

A single-layer neural network can solve a wide range of problems; its main limitation is that it may take a long time to arrive at a solution, and adding more neurons can speed it up. How many neurons to use is governed by how the data is clustered and classified, as the following example illustrates. Let X symbolize undesired values and O symbolize desired values. If the data is distributed such that the two classes can be separated by a single line, then a single perceptron can be used. Data which cannot be linearly separated requires multiple separating boundaries, which translates into using more perceptrons arranged in layers.
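
The classic example of data that cannot be linearly separated is the XOR pattern: plotted on a plane, its two classes cannot be divided by any single line, so no single perceptron can classify them, but two layers of perceptrons can. The sketch below uses hand-picked weights (not learned ones) to show one such arrangement:

    def step(a):
        # Threshold activation: fire (1) or stay silent (0).
        return 1 if a > 0 else 0

    def xor_net(x1, x2):
        h1 = step(x1 + x2 - 0.5)      # first-layer perceptron: behaves like OR
        h2 = step(x1 + x2 - 1.5)      # first-layer perceptron: behaves like AND
        return step(h1 - h2 - 0.5)    # output perceptron: OR but not AND

    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x1, x2, "->", xor_net(x1, x2))    # prints 0, 1, 1, 0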

 

Perceptrons can be combined in series or in parallel. In either case, a perceptron learns by adjusting its weights so as to minimize the error between the output it generates and the correct answer. This process is called training the neural network, and it is best summarized by the Perceptron Learning Theorem (also known as the perceptron convergence theorem), which states that if a solution exists, that is, if the data is linearly separable, the training procedure will find it in a finite number of steps.
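
A minimal sketch of such a training loop is the perceptron learning rule: whenever the output is wrong, nudge the weights toward the correct answer. The dataset and learning rate here are invented for illustration; because this data is linearly separable, the loop is guaranteed to terminate:

    import numpy as np

    # Toy linearly separable data: label +1 when x1 + x2 is large, else -1.
    X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.9, 0.8]])
    y = np.array([-1, -1, -1, 1, 1])

    w = np.zeros(2)    # weights, adjusted during training
    b = 0.0            # bias
    lr = 0.1           # learning rate

    for epoch in range(100):
        errors = 0
        for xi, target in zip(X, y):
            prediction = 1 if np.dot(w, xi) + b > 0 else -1
            if prediction != target:
                w += lr * target * xi    # move the decision boundary toward the mistake
                b += lr * target
                errors += 1
        if errors == 0:    # every point classified correctly: training is done
            break

    print("learned weights:", w, "bias:", b)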

Today neural networks are used in areas such as pattern recognition, optical character recognition, outcome prediction, and classification problems. They can be implemented in either software or hardware. Hephaistus uses advanced neural network development techniques to solve problems in the areas of medical decision making, content-based image retrieval, and decision tree pruning. The books listed below are a good starting point for learning more about this computing technique.

 


Bibliography

An Introduction to Neural Networks. James A. Anderson. MIT Press, Cambridge, MA, 1995. ISBN 0-262-01144-1.

Neural Network Design. Martin T. Hagan, Howard B. Demuth, Mark Beale. PWS Publishing Co., Boston, 1996. ISBN 0-534-94332-2.

Neural Networks in Computer Intelligence. LiMin Fu. McGraw-Hill, Inc., New York, 1994. ISBN 0-07-911817-8.

Neural Network Fundamentals with Graphics, Algorithms and Applications. N.K. Bose, P. Liang. McGraw-Hill, Inc., New York, 1996. ISBN 0-07-006618-3.

Neural Networks: A Comprehensive Foundation. Simon Haykin. Macmillan College Publishing Company, Inc., 1994. ISBN 0-02-352761-7.