Hopfield Network and Human Learning
The Hopfield network is an idealized, simple model of attractor neural network dynamics. It lends itself well to mathematical analysis, but in its standard form it is too abstract for practical computational intelligence or for detailed neural modeling. It can, however, be modified. With a cap placed on the absolute values of its link weights, the standard Hopfield net can implement a form of continual learning, remaining functional as new patterns arrive. There are drawbacks: maintaining a continually shifting window of memories may require a large number of neurons.
A Hopfield network is a form of recurrent artificial neural network popularized in 1982 by John Hopfield, though a similar model was described earlier by Little in 1974. Hopfield nets serve as content-addressable memory systems built from binary threshold nodes. To understand how they can support continual learning, it is worth briefly examining what they are and what they do.
Think of a Hopfield net in terms of a collection C of "training patterns," defined as subsets of a set N of data elements and denoted c1, c2, and so on. The Hopfield net for C over N is the collection of link weights, denoted wij, that causes the network to have the elements of C as attractors under the standard activation-spreading dynamics, with a threshold at each node. Training begins with all weights set to a base value of zero (Maurer, Hersch & Billard, 2005).
The training patterns cj are then cycled through consecutively. For each pattern, the nodes it contains are activated, and the weights on the links are adjusted according to whichever standard Hopfield learning rule is in use, scaled by a learning rate R. Once adjustments have been made for all patterns, the network is trained: presenting a partial or noisy pattern stimulates the corresponding nodes and lets activation spread until the network settles on the stored pattern.
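To make this concrete, here is a minimal sketch in Python of the cycle just described, using the classic Hebbian rule: weights start at zero and each pattern strengthens the links between its co-active nodes. The specific patterns, the learning rate R, and the synchronous threshold update are illustrative assumptions, not details taken from the sources cited here.

```python
import numpy as np

def train_hopfield(patterns, R=1.0):
    """Cycle through the training patterns c_j, adjusting link weights
    with a Hebbian update scaled by the learning rate R."""
    n = patterns.shape[1]
    W = np.zeros((n, n))            # all weights start at base value zero
    for c in patterns:
        W += R * np.outer(c, c)     # strengthen links between co-active nodes
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def recall(W, state, max_steps=20):
    """Spread activation with threshold updates until the net settles."""
    s = state.copy()
    for _ in range(max_steps):
        new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new, s):  # reached an attractor
            break
        s = new
    return s

# Store two bipolar (+1/-1) patterns, then recover one from a noisy cue.
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])
W = train_hopfield(patterns)
cue = np.array([1, 1, 1, 1, -1, -1, -1, 1])  # last bit flipped
print(recall(W, cue))                        # settles on the first pattern
```

Starting from the corrupted cue, a single threshold update already restores the stored pattern, which is exactly the content-addressable behavior described above.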
A standard Hopfield network can continue to operate with up to 85% of its connections deleted. This robustness suggests practical applications of Hopfield net concepts to the brain and to computational intelligence systems, since only a fraction of the network is needed to reactivate the training patterns. Capacity, however, is limited: the number of patterns that can be stored reliably is only on the order of 0.1 times the number of neurons. Hopfield nets therefore cannot absorb a huge influx of information; the network cannot handle overloading.
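This tolerance to deletion is easy to probe numerically: train a network, zero out a random 85% of its symmetric connections, and test recall from a corrupted cue. The sketch below is an illustrative experiment with arbitrarily chosen sizes and a lightly loaded network, not a replication of any particular published result.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 500, 3                    # light load, well below capacity

def recall(W, state, max_steps=50):
    s = state.copy()
    for _ in range(max_steps):
        new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new, s):
            break
        s = new
    return s

patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = (patterns.T @ patterns).astype(float)  # Hebbian weights
np.fill_diagonal(W, 0)

# Randomly delete 85% of the connections, keeping the matrix symmetric.
keep = np.triu(rng.random((n, n)) < 0.15, 1)
keep = keep | keep.T
W_diluted = W * keep

cue = patterns[0].copy()
cue[:25] *= -1                             # corrupt 5% of the pattern
result = recall(W_diluted, cue)
print("overlap after recall:", (result @ patterns[0]) / n)  # close to 1.0
```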
For a Hopfield network to function as a continual learning tool, it must cope with a constantly changing environment. One option is simply to flush the links occasionally and retrain from scratch; this is inefficient and makes a poor model of ongoing learning. A second option is to let the network occasionally "unlearn" things, and there is literature both for and against this approach. Unlearning also generates philosophical interest, as it has been conjectured to be connected to REM sleep in humans.
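One common formulation of unlearning in the literature lets the network settle from a random state and then applies a weak anti-Hebbian update to whatever attractor it reaches, which tends to erode spurious memories more than stored ones. The sketch below is an assumed, minimal version of that "dreaming" procedure; the unlearning rate epsilon and the number of dreams are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_patterns = 200, 10

def recall(W, state, max_steps=50):
    s = state.copy()
    for _ in range(max_steps):
        new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new, s):
            break
        s = new
    return s

patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = (patterns.T @ patterns).astype(float)   # Hebbian weights
np.fill_diagonal(W, 0)

# "Dream": settle from a random state, then weaken whatever attractor
# the network found with a small anti-Hebbian update.
epsilon, dreams = 0.01, 100
for _ in range(dreams):
    s = recall(W, rng.choice([-1, 1], size=n))
    W -= epsilon * np.outer(s, s)
    np.fill_diagonal(W, 0)

# The stored patterns should remain (approximate) attractors afterwards.
print([(recall(W, p) @ p) / n for p in patterns[:3]])
```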
Another approach is weight-capping: bounding the link weights above and below, which forces the network to give precedence to the most recent memories while gradually forgetting old ones. In theory this substantially reduces memory capacity, but the literature suggests weight-capping may be the most promising approach to continual learning with artificial neural networks.
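A minimal sketch of weight-capping follows: each new pattern is learned with an incremental Hebbian update, and the weights are then clipped to a fixed band. The cap value and learning rate are illustrative assumptions; the qualitative effect to look for is that recent patterns remain attractors while the oldest fade.

```python
import numpy as np

rng = np.random.default_rng(2)
n, cap, R = 200, 3.0, 1.0

def recall(W, state, max_steps=50):
    s = state.copy()
    for _ in range(max_steps):
        new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new, s):
            break
        s = new
    return s

W = np.zeros((n, n))
stream = rng.choice([-1, 1], size=(60, n))  # a long stream of patterns

for c in stream:                            # learn one pattern at a time
    W += R * np.outer(c, c)
    np.fill_diagonal(W, 0)
    W = np.clip(W, -cap, cap)               # cap weights above and below

overlap = lambda p: (recall(W, p) @ p) / n
print("oldest pattern :", overlap(stream[0]))    # likely forgotten
print("newest pattern :", overlap(stream[-1]))   # likely still an attractor
```

Because the weights saturate at the cap, old imprints are progressively overwritten by new ones, giving the sliding window of memories described above instead of the catastrophic overloading of an uncapped network.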
In the model of Maurer, Hersch and Billard, the ANN module takes its inputs as sequences of key-points. Each sequence is stored in Hopfield networks linked together by a matrix of weights W, and the sequences are classified into sets of classes (c = 1, 2, and so on). The weight matrix stores the correlations among the neuronal activities, which are bounded and normalized (Maurer, Hersch & Billard, 2005, p. 3).
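The details of that module go beyond the source text, but the underlying idea of storing a sequence in Hopfield-style weights can be sketched with an asymmetric Hebbian rule, in which the weights map each pattern onto its successor. Everything below, including the normalization by n, is an illustrative assumption rather than the authors' actual formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
n, seq_len = 100, 5

# A sequence of key-point patterns c_1 ... c_5 (bipolar for simplicity).
seq = rng.choice([-1, 1], size=(seq_len, n))

# Asymmetric Hebbian rule: W maps each pattern to its successor, storing
# the correlation between consecutive activities, normalized by n.
W = np.zeros((n, n))
for t in range(seq_len - 1):
    W += np.outer(seq[t + 1], seq[t]) / n

# Drive the network with the first pattern and replay the sequence.
s = seq[0].copy()
for t in range(1, seq_len):
    s = np.where(W @ s >= 0, 1, -1)
    print(f"step {t}: overlap with c_{t + 1} =", (s @ seq[t]) / n)
```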
Still, this area of research remains limited, and more work is needed before the approach shows consistent success in the experimental phase of research.
Neural networks are an important key to understanding and generating better learning techniques. Current research aims at mapping out the neuron connections of the human brain. In the past, memory and learning processes have generally been quantified for symmetrically connected neural networks; in reality, however, neural networks are asymmetrically connected. In a 2013 article, the researchers developed a nonequilibrium "landscape-flux theory for asymmetrically connected neural networks" and found that "the landscape topography is critical in determining the global stability and function of the neural networks...."