Simulation of Neural Networks
SNNs and JavaNNS are now very outdated and are no longer supported or maintained. Better use a modern neural network simulator, like Google Tensorflow 2.0 or Facebook's PyTorch 1.5, which also have very good online tutorials and support GPUs.
Most of the department's research in artificial neural networks uses the Stuttgart Neural Network Simulator (SNNS), originally developed by the chair's team at the University of Stuttgart. SNNS is an efficient, universal simulator of neural networks for Unix workstations and Unix PCs. It consists of a simulator kernel written in ANSI C for efficiency and portability, a graphical user interface under X11R6, a network compiler that can generate C programs from trained SNNS nets, a batch version of the simulator, and various other analysis tools.
Currently the following learning algorithms are implemented in SNNS (see the backpropagation sketch after the list):
- Standard Backpropagation (BP)
- Enhanced Backpropagation with momentum term and flat spot elimination
- Batch Backpropagation
- Backpropagation with weight-decay
- Quickprop
- Resilient Propagation (Rprop)
- Backpercolation
- Counterpropagation (without neighborhood relation in the Kohonen-layer)
- Dynamic learning vector quantization (DLVQ)
- Backpropagation Through Time (BPTT)
- Batch Backpropagation Through Time (BBPTT)
- Quickprop Through Time (QPTT)
- Cascade Correlation (CC) with embedded Backpropagation, Quickprop or Rprop
- Recurrent Cascade Correlation (RCC)
- Time-Delay-Networks (TDNN)
- Radial Basis Functions (RBF)
- Radial Basis Functions with Dynamic Decay Adjustment (RBF-DDA)
- Adaptive Resonance Theory 1 (ART1)
- Adaptive Resonance Theory 2 (ART2)
- ARTMAP Network
- Self-organizing maps (Kohonen-maps, SOM)
- Auto-associative networks with Hebbian learning or Delta-rule
- Jordan-networks
- Elman-networks and expanded hierarchic Elman-networks
- Monte-Carlo training
- Simulated Annealing
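To make the first two entries of the list concrete, the following is a minimal NumPy sketch of standard backpropagation with a momentum term for a network with one hidden layer of sigmoid units; it is an illustration of the algorithm, not SNNS code:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_step(x, t, W1, W2, V1, V2, lr=0.1, momentum=0.9):
        """One backpropagation step with momentum for a 1-hidden-layer net.
        x: input vector, t: target vector, W1/W2: weight matrices,
        V1/V2: previous weight updates (carry the momentum term)."""
        # Forward pass
        h = sigmoid(W1 @ x)          # hidden activations
        y = sigmoid(W2 @ h)          # output activations
        # Backward pass (deltas for sigmoid units, squared-error loss)
        delta_out = (y - t) * y * (1 - y)
        delta_hid = (W2.T @ delta_out) * h * (1 - h)
        # Weight updates: gradient step plus momentum term
        V2 = -lr * np.outer(delta_out, h) + momentum * V2
        V1 = -lr * np.outer(delta_hid, x) + momentum * V1
        return W1 + V1, W2 + V2, V1, V2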
Several pruning algorithms can reduce the number of weights or neurons of a network in order to achieve higher generalization performance with fewer free parameters (see the sketch after the list):
- Magnitude Based Pruning (Mag)
- Optimal Brain Damage (OBD)
- Optimal Brain Surgeon (OBS)
- Skeletonization (Skel)
- Non-contributing Units
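Magnitude based pruning, the first entry above, is the simplest of these methods: weights with the smallest absolute values are removed and the network is briefly retrained. A rough NumPy sketch of the idea (not the SNNS implementation):

    import numpy as np

    def magnitude_prune(weights, fraction=0.1):
        """Set the smallest-magnitude fraction of the weights to zero.
        After pruning, the network would normally be retrained briefly."""
        w = weights.copy()
        k = int(fraction * w.size)
        if k == 0:
            return w
        threshold = np.sort(np.abs(w), axis=None)[k - 1]
        w[np.abs(w) <= threshold] = 0.0
        return w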
Finally, it is now possible to optimize the topology of neural nets by means of genetic algorithms using the tool ENZO from the University of Karlsruhe, which is integrated into SNNS.
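ENZO's actual procedure is described in its own documentation; as a rough illustration of the underlying idea only, a genetic algorithm for topology optimization could encode, for instance, the number of hidden units in the genome and select by validation error. The function evaluate below, assumed to train a network with the given number of hidden units and return its validation error, is a hypothetical placeholder:

    import random

    def evolve_topology(evaluate, pop_size=10, generations=20, max_hidden=50):
        """Toy genetic search over the number of hidden units.
        evaluate(n_hidden) is assumed to return a validation error (lower is better)."""
        population = [random.randint(1, max_hidden) for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(population, key=evaluate)
            parents = scored[: pop_size // 2]          # selection: keep the better half
            children = []
            for _ in range(pop_size - len(parents)):
                a, b = random.sample(parents, 2)
                child = (a + b) // 2                   # crossover: average the sizes
                if random.random() < 0.3:              # mutation: perturb the size
                    child = max(1, child + random.randint(-3, 3))
                children.append(child)
            population = parents + children
        return min(population, key=evaluate)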
For the latest news and software downloads, please see our SNNS page.
Contact
Prof. Dr. Andreas Zell, Tel.: (07071) 29-76455, zell@informatik.uni-tuebingen.de