Autoassociative networks store single instances of items and can be thought of as a simple analogue of human memory. In an autoassociative network, each pattern presented to the network serves as both the input and the output pattern. Autoassociative networks are typically used for pattern-completion tasks: during retrieval, a probe pattern is presented to the network, and the network settles on the composite of its learned patterns that is most consistent with the new information.
Autoassociative networks typically consist of a single layer of nodes, each node representing some feature of the environment. In SNNS, however, they are represented by two layers to make it easier to compare the input to the output. The following section explains the layout in more detail.
Autoassociative networks must use the update function RM_Synchronous and the initialization function RM_Random_Weights; using others may destroy essential characteristics of the autoassociative network. Please note that the update function RM_Synchronous takes as a parameter the number of iterations performed before the network output is computed. A value of 50 has proven very suitable here.
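The retrieval behavior described above can be illustrated with a minimal Hopfield-style sketch: patterns are stored by Hebbian outer products, and a corrupted probe is completed by updating all units synchronously for a fixed number of iterations. This is an independent illustration in Python, not the SNNS implementation; the function names and the toy pattern are made up for the example, and only the iteration count of 50 is taken from the text.

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product storage of bipolar (+1/-1) patterns.
    (Illustrative only -- not the SNNS learning rule.)"""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def retrieve(W, probe, iterations=50):
    """Update all units at once (synchronously) for a fixed number of
    iterations, mirroring the iteration parameter of RM_Synchronous."""
    s = probe.copy()
    for _ in range(iterations):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Store one pattern, then present a probe with one corrupted unit.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = store(pattern[np.newaxis, :])
probe = pattern.copy()
probe[0] = -probe[0]          # flip one unit
print(retrieve(W, probe))     # the corrupted unit is restored
```

With a single stored pattern, one synchronous sweep already restores the flipped unit; with several stored patterns, the extra iterations let the state settle on the closest stored pattern.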
All implementations of autoassociative networks in SNNS report error as the sum of squared differences between the input pattern on the world layer and the resulting pattern on the learning layer, after the pattern has been propagated a user-defined number of times.
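The reported error measure can be sketched in a few lines: it is the sum of squared differences between the input pattern and the network's output after propagation. The function name and the example patterns below are illustrative, not SNNS identifiers.

```python
import numpy as np

def sum_squared_error(input_pattern, output_pattern):
    """Sum of squared differences between the pattern on the world
    layer (input) and the pattern on the learning layer (output)."""
    diff = np.asarray(input_pattern) - np.asarray(output_pattern)
    return float(np.sum(diff ** 2))

# One unit differs by 2 (bipolar +1 vs -1), so the error is 2**2 = 4.
print(sum_squared_error([1, -1, 1, -1], [1, -1, -1, -1]))  # 4.0
```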