Assume a demonstrator that, for any given state of a control system, produces a control input that steers the system toward the equilibrium. In this paper, we present an algorithm that uses such a demonstrator to compute a feedback control law that steers the system toward the equilibrium from any given state and that, in addition, inherits optimality guarantees from the demonstrator. The resulting feedback control law is based on switched LQR tracking; hence the resulting controller is much simpler and allows for a much more efficient implementation than a control law based on directly querying a typical demonstrator. Our algorithm is inspired by techniques from robot motion planning, such as simulation-based LQR trees, but it also produces a Lyapunov-like function that certifies the stability of the resulting controller. Moreover, we provide rigorous convergence and optimality results for the algorithm itself.
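As a loose illustration (not taken from the paper) of why LQR feedback is cheap at runtime compared to querying a demonstrator, the sketch below value-iterates the scalar discrete-time Riccati equation for a hypothetical unstable plant x_{k+1} = a x_k + b u_k and then applies the resulting gain in closed loop; the plant parameters and cost weights are illustrative assumptions, and the general algorithm in the paper of course operates on multivariate systems and switches between tracking controllers.

```python
def lqr_gain_1d(a, b, q, r, iters=500):
    """Value-iterate the scalar discrete Riccati equation and return the LQR gain k.

    Cost: sum over k of q*x_k^2 + r*u_k^2, dynamics x_{k+1} = a*x_k + b*u_k.
    """
    p = q  # start the iteration from the stage cost
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

# Hypothetical unstable scalar plant (a > 1) with unit cost weights.
a, b, q, r = 1.2, 1.0, 1.0, 1.0
k = lqr_gain_1d(a, b, q, r)

# Closed-loop rollout: the feedback u = -k*x is a single multiply per step,
# in contrast to evaluating a full demonstrator (e.g. an online optimizer).
x = 5.0
for _ in range(50):
    x = (a - b * k) * x
```

After the rollout, the state has contracted essentially to the equilibrium, since the closed-loop pole a - b*k lies strictly inside the unit interval.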