Machine learning systems are often deployed in settings where individuals can adapt their features to obtain a desired predicted outcome. Such strategic behavior can cause a sharp drop in model performance at deployment. In this work, we address this problem by learning classifiers that incentivize their decision subjects to change their features in ways that benefit all parties. We frame the dynamics of prediction and adaptation as a two-stage game and characterize the equilibrium strategies of the model owner and its decision subjects. We benchmark our method on simulated and real-world datasets, demonstrating how it can be used to incentivize improvement or discourage adversarial manipulation. Our empirical results show that our method outperforms existing approaches, even when our assumptions are misspecified.