Reservoir computing (RC) is a computing scheme related to recurrent neural network theory. As a model for neural activity in the brain, it has attracted considerable attention, especially because of its very simple training method. However, building a functional, on-chip, photonic implementation of RC remains a challenge. Scaling delay lines down from optical-fiber scale to chip scale yields RC systems that compute faster, but it also requires that the input signals be sped up correspondingly, which can be impractical or expensive. In this brief, we show that this problem can be alleviated by a masked RC system in which the amplitude of the input signal is modulated by a binary-valued mask. For a speech recognition task, we demonstrate that the required input sample rate can be a factor of 40 lower than in a conventional RC system. In addition, we show that linear discriminant analysis and input matrix optimization are a well-performing alternative to linear regression for reservoir training.
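The binary masking step can be illustrated with a minimal sketch. Here each slow input sample is held for one delay period and multiplied by a random binary (±1) mask, producing one fast value per virtual node of the delay-line reservoir; the node count `n_nodes = 50` and the mask statistics are illustrative assumptions, not values from the brief.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 50  # virtual nodes per delay period (assumed value, for illustration)
mask = rng.choice([-1.0, 1.0], size=n_nodes)  # binary-valued amplitude mask

def mask_input(samples, mask):
    """Expand each scalar input sample into len(mask) masked values.

    One slow input sample drives all virtual nodes within one delay
    period, so the input sample rate stays far below the node rate.
    """
    samples = np.asarray(samples, dtype=float)
    # outer product: row i is sample i scaled by the mask; ravel gives
    # the serialized fast-rate stream fed into the delay line
    return np.outer(samples, mask).ravel()

stream = mask_input([0.2, -0.5, 1.0], mask)
# 3 input samples -> 3 * n_nodes masked values at the reservoir node rate
```

In a hardware delay-line reservoir this expansion is what lets the chip-scale system run at its fast internal timescale while the external input source keeps a much lower sample rate.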