Forecasting the trajectories of human-driven vehicles is a crucial problem in autonomous driving. Trajectory forecasting in urban areas is particularly hard due to complex interactions with other actors (cars and pedestrians) and with traffic lights (TLs). While the former interactions have been actively studied, the impact of TLs on prediction has rarely been discussed. Motivated by the observation that humans drive differently depending on TL phase (green, yellow, red) and timing (elapsed time in the phase), we propose a novel approach to the trajectory forecasting problem. In our approach, we take the states of TLs as part of the conditional inputs to deep-learning models (Human Policy Models) that map a sequence of a vehicle's states and a context to the vehicle's subsequent action (longitudinal acceleration). Trained on real-world naturalistic driving data recorded near a signalized intersection over two years, the models learn how human drivers react to various TL states. These Human Policy Models are then used for trajectory forecasting; the key idea is to utilize the future phases and timings of TLs obtained through vehicle-to-infrastructure communication. An ablation study shows that utilizing the phases and timings of TLs significantly improves the accuracy of the forecasts. Finally, our probabilistic Human Policy Models provide probabilistic contexts for the forecasts and capture competing policies, for example, pass or stop in the yellow-light dilemma zone.
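The conditioning and rollout scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: all names (`VehicleState`, `TrafficLightState`, `policy_accel`, `rollout`) are hypothetical, and a placeholder linear map stands in for the learned deep network. The essential point it shows is that the policy is queried with the *future* TL phase and timing at each forecast step, read from a known signal schedule such as one obtained via vehicle-to-infrastructure communication.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical data structures; field names are illustrative only.
@dataclass
class VehicleState:
    position: float   # longitudinal position along the lane (m)
    speed: float      # m/s

@dataclass
class TrafficLightState:
    phase: str        # "green" | "yellow" | "red"
    elapsed: float    # seconds since the current phase began

PHASE_CODE = {"green": 0.0, "yellow": 1.0, "red": 2.0}

def featurize(history: List[VehicleState], tl: TrafficLightState) -> List[float]:
    """Concatenate recent vehicle states with the TL phase/timing context."""
    feats: List[float] = []
    for s in history:
        feats.extend([s.position, s.speed])
    feats.extend([PHASE_CODE[tl.phase], tl.elapsed])
    return feats

def policy_accel(history: List[VehicleState],
                 tl: TrafficLightState,
                 weights: Optional[List[float]] = None) -> float:
    """Stand-in for the learned Human Policy Model: a linear map from
    features to longitudinal acceleration (m/s^2). The actual model is
    a deep network trained on naturalistic driving data."""
    x = featurize(history, tl)
    if weights is None:
        weights = [0.0] * len(x)  # untrained placeholder
    return sum(w * xi for w, xi in zip(weights, x))

def rollout(state: VehicleState,
            tl_schedule: List[TrafficLightState],
            dt: float = 0.5) -> List[VehicleState]:
    """Forecast by repeatedly applying the policy and advancing simple
    longitudinal kinematics; at each step the future TL state is read
    from the signal schedule rather than assumed constant."""
    history = [state]
    for tl in tl_schedule:
        a = policy_accel(history[-3:], tl)          # condition on recent states + TL
        v = max(0.0, history[-1].speed + a * dt)    # no reversing
        p = history[-1].position + v * dt
        history.append(VehicleState(p, v))
    return history
```

In this sketch the forecast horizon equals the length of `tl_schedule`, so a phase change (e.g. green turning yellow mid-horizon) is visible to the policy at exactly the step where it occurs.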