In code generation, it is essential to guarantee that the generated code satisfies the grammar constraints of the programming language (PL). However, the frequent failure to ensure grammatical correctness is a fatal drawback of sequence-based models, which are widely used in code generation. In this paper, we devise a pushdown automaton (PDA)-based methodology to address this problem, exploiting the principle that a PL is a subset of a PDA-recognizable language, so code accepted by the PDA is guaranteed to be grammatical. Specifically, we construct a PDA module and design an algorithm that constrains the generation of sequence-based models to ensure grammatical correctness. Guided by this methodology, we further propose CodePAD, a sequence-based code generation framework equipped with the PDA module, which integrates the deduction of the PDA into deep learning. In addition, the framework leverages the states of the PDA deduction (including a state representation, a state prediction task, and joint prediction with states) to help models understand the deduction process. To comprehensively evaluate CodePAD, we construct a PDA for Python and conduct extensive experiments on four public benchmark datasets. CodePAD can be applied to any existing sequence-based model; relative to the base models, it improves BLEU by 17\% on CONALA, exact match (EM) by 8\% on DJANGO, and BLEU by 55\% on JUICE-10K. Moreover, our method significantly enhances pre-trained models, e.g., raising the (BLEU, CodeBLEU) of CodeGen-350M from (1.55, 3.21) to (14.44, 21.54) on MBPP in the zero-shot setting.
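To make the constraint mechanism concrete, the following is a minimal sketch, not CodePAD's actual implementation: at each decoding step, candidate tokens are filtered so that only those keeping the PDA in a legal configuration survive. All names here (`pda_step`, `constrained_decode`, the toy bracket-matching PDA, the `<eos>` token) are hypothetical placeholders standing in for the full Python-grammar PDA the paper constructs.

\begin{verbatim}
# Hedged sketch of PDA-constrained decoding (hypothetical names; a toy
# bracket-matching PDA stands in for the full Python-grammar PDA).

OPEN_TO_CLOSE = {"(": ")", "[": "]", "{": "}"}

def pda_step(stack, token):
    """Return the new stack if `token` keeps the PDA in a legal
    configuration, or None if the token must be rejected."""
    stack = list(stack)
    for ch in token:
        if ch in OPEN_TO_CLOSE:
            stack.append(OPEN_TO_CLOSE[ch])   # push the expected closer
        elif ch in OPEN_TO_CLOSE.values():
            if not stack or stack.pop() != ch:
                return None                   # mismatched closer: reject
    return stack

def constrained_decode(model_next_tokens, max_len=32):
    """Greedy decoding where the PDA masks ungrammatical tokens.
    `model_next_tokens(prefix)` yields candidates ranked by model score."""
    prefix, stack = [], []
    for _ in range(max_len):
        chosen = None
        for token in model_next_tokens(prefix):
            if token == "<eos>":
                if not stack:
                    return prefix             # accept only with empty stack
                continue                      # EOS illegal while brackets open
            new_stack = pda_step(stack, token)
            if new_stack is not None:
                chosen, stack = token, new_stack
                break                         # take highest-ranked legal token
        if chosen is None:
            break                             # no legal continuation exists
        prefix.append(chosen)
    return prefix

# Toy "model": prefers the (often illegal) closer; the PDA forces balance.
def toy_ranker(prefix):
    return ["<eos>", ")", "("] if len(prefix) >= 4 else [")", "(", "<eos>"]

print(constrained_decode(toy_ranker))  # ['(', ')', '(', ')']
\end{verbatim}

In this toy run, the unconstrained model would emit an unmatched ")" immediately; the PDA mask rejects it, so only grammatical output can be produced, mirroring the guarantee the methodology provides for full PL grammars.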