Delay-coupled optoelectronic systems are promising candidates for powerful information-processing devices. In this brief, we consider such a system that has previously been studied in the context of reservoir computing (RC). Instead of viewing the system as a random dynamical system, we treat it as a true machine-learning model that can be fully optimized. We apply a recently introduced extension of backpropagation through time, an optimization algorithm originally designed for recurrent neural networks, to train the system on a difficult phoneme recognition task. We show that full optimization of all system parameters of delay-coupled optoelectronic systems yields a significant improvement over the previously applied RC approach.