Learning to Steer by Mimicking Features from Heterogeneous Auxiliary Networks

Yuenan Hou      Zheng Ma      Chunxiao Liu      Chen Change Loy
AAAI Conference on Artificial Intelligence (AAAI) 2019


The training of many existing end-to-end steering angle prediction models relies heavily on steering angles as the sole supervisory signal. Without learning from richer contexts, these methods are susceptible to sharp road curves, challenging traffic conditions, strong shadows, and severe lighting changes. In this paper, we considerably improve the accuracy and robustness of predictions through feature mimicking from heterogeneous auxiliary networks, a new and effective training method that provides much richer contextual signals beyond the steering direction alone. Specifically, we train our steering angle prediction model by distilling multi-layer knowledge from multiple heterogeneous auxiliary networks that perform related but different tasks, e.g., image segmentation or optical flow estimation. In contrast to multi-task learning, our method does not require expensive annotations of related tasks on the target set. This is made possible by applying contemporary off-the-shelf networks to the target set and mimicking their features in different layers after transformation. The auxiliary networks are discarded after training, so they do not affect the runtime efficiency of our model. Our approach achieves a new state-of-the-art on the Udacity and Comma.ai datasets, outperforming the previous best by large margins of 12.8% and 52.1%, respectively. Encouraging results are also shown on the Berkeley Deep Drive (BDD) dataset.
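To make the training objective concrete, the sketch below illustrates a feature-mimicking loss of the kind the abstract describes: a student feature map is passed through a learned transformation (standing in for a 1x1 conv that aligns channel dimensions) and penalized for deviating from the frozen auxiliary-network feature, summed over mimicked layers and added to the steering regression loss. All names, shapes, and the weighting factor `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mimic_loss(student_feat, teacher_feat, transform):
    """Mean squared distance between the transformed student feature map
    and the frozen auxiliary-network (teacher) feature map.
    Feature maps are (C, H, W); `transform` is a (C_t, C_s) matrix,
    a stand-in for the learned 1x1 conv that aligns channel dimensions."""
    # Apply the channel transform at every spatial location (a 1x1 conv).
    aligned = np.einsum('ts,shw->thw', transform, student_feat)
    return np.mean((aligned - teacher_feat) ** 2)

def total_loss(pred_angle, true_angle, feat_pairs, transforms, lam=0.1):
    """Steering regression loss plus weighted multi-layer mimicking terms.
    `feat_pairs` pairs student features with auxiliary-network features
    (e.g. from a segmentation or optical-flow model run on the same frame).
    `lam` is a hypothetical balancing weight."""
    steer = (pred_angle - true_angle) ** 2
    mimic = sum(mimic_loss(s, t, m)
                for (s, t), m in zip(feat_pairs, transforms))
    return steer + lam * mimic

# Toy example: one mimicked layer, 8 student channels vs. 16 teacher channels.
s_feat = rng.standard_normal((8, 4, 4))
t_feat = rng.standard_normal((16, 4, 4))
m = rng.standard_normal((16, 8)) * 0.1
loss = total_loss(pred_angle=0.2, true_angle=0.1,
                  feat_pairs=[(s_feat, t_feat)], transforms=[m])
print(loss)
```

Because the auxiliary networks only supply target features during training, dropping the mimicking terms at inference leaves the student unchanged, which is why the method adds no runtime cost.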


Code and Models


@article{hou2018learning,
 title={Learning to Steer by Mimicking Features from Heterogeneous Auxiliary Networks},
 author={Hou, Yuenan and Ma, Zheng and Liu, Chunxiao and Loy, Chen Change},
 journal={arXiv preprint arXiv:1811.02759},
 year={2018}
}