Liquid State Machines, Echo State Networks and the Backpropagation-Decorrelation learning rule all use a randomly constructed recurrent neural network as a dynamic reservoir that performs a nonlinear premapping of current and previous inputs. Many diverse topologies, weight distributions and neuron models have been described in the literature. However, a systematic comparison of these fundamentally similar methods is not yet available.
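The shared idea can be sketched as follows: a fixed random recurrent network maps the input history into a high-dimensional state. This is a minimal Echo State Network-style sketch, not an implementation from the talk; the reservoir size, weight ranges and the spectral-radius value 0.9 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))  # input weights (fixed, random)
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))    # recurrent weights (fixed, random)

# Global weight scaling: rescale W to a chosen spectral radius,
# a common knob for controlling the reservoir's internal dynamics.
rho = 0.9
W *= rho / max(abs(np.linalg.eigvals(W)))

def step(x, u):
    """One reservoir update: a nonlinear premapping of the current input and state."""
    return np.tanh(W @ x + W_in @ u)

# Drive the reservoir with a short random input signal.
x = np.zeros(n_res)
for u in rng.standard_normal((20, n_in)):
    x = step(x, u)
```

In all of the methods above, only a readout trained on the states `x` differs; the reservoir itself stays untrained.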
In this talk, we present recent experimental results that offer both an
overview and a unification of the reservoir computing methods mentioned
above. We describe several benchmark tasks with quite different
characteristics regarding timescale and complexity, including isolated-digit
speech recognition and a memorization task. We discuss the
influence of node complexity versus reservoir size on the performance of the
reservoir. We also link several global weight scaling parameters to the
resulting internal dynamics of the reservoir. Finally, we investigate the
interaction between node memory (controlled by self-feedback) and reservoir
memory (controlled by reservoir size) on different benchmark tests. We
conclude with some remarks on the underlying similarities and differences
among the various forms of reservoir computing described in the literature.
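The node-memory mechanism mentioned above can be illustrated with a leaky-integrator update, in which a self-feedback term retains part of the previous node state. This is a hedged sketch; the leak rates used here are illustrative, not values from the talk.

```python
import numpy as np

def leaky_step(x, drive, alpha):
    """Leaky-integrator node: the self-feedback term (1 - alpha) * x gives the node memory."""
    return (1.0 - alpha) * x + alpha * np.tanh(drive)

# With no input drive, a node with a small leak rate decays slowly,
# i.e. the node itself retains past activity longer.
x_fast, x_slow = 1.0, 1.0
for _ in range(10):
    x_fast = leaky_step(x_fast, 0.0, alpha=0.9)  # weak self-feedback: short node memory
    x_slow = leaky_step(x_slow, 0.0, alpha=0.1)  # strong self-feedback: long node memory
```

After ten steps the weakly self-coupled node has nearly forgotten its initial state, while the strongly self-coupled one still retains a substantial fraction of it, which is the node-memory effect traded off against reservoir size in the experiments.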