The ability of pretrained Transformers to remember factual knowledge is essential for knowledge-intensive downstream tasks such as closed-book question answering. Existing work has shown that pretrained Transformers can, to some degree, recall and leverage factual knowledge that appears in the pretraining corpus. However, because model capacity is limited, the amount of factual knowledge a pretrained model can remember is also limited. Dai et al. (2022) find that the Feed-Forward Networks (FFNs) in pretrained Transformers store factual knowledge in a memory-like manner. Inspired by this finding, we propose a Neural Knowledge Bank (NKB) to store extra factual knowledge for pretrained Transformers. Specifically, we also regard FFNs as key-value memories and extend them with additional memory slots. During knowledge injection, we freeze the original model and inject factual knowledge only into the extended memory slots, so the pretrained model suffers no catastrophic forgetting. In addition, viewing FFNs as key-value memories makes the NKB highly interpretable. On three closed-book question answering datasets, we show that the NKB has a strong ability to store extra factual knowledge. We also verify on two representative generation tasks, summarization and machine translation, that the NKB does not degrade the general language generation ability of pretrained models. Further, we thoroughly analyze the NKB to reveal its working mechanism and present the meaning of its keys and values in a human-readable way. Building on this, we make a preliminary attempt to directly update the factual knowledge stored in the NKB without any additional training.
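
For illustration, below is a minimal PyTorch sketch of the idea described above: the FFN is treated as a key-value memory, extended with additional memory slots, and only the new slots are trained during knowledge injection while the pretrained parameters stay frozen. The class and parameter names (e.g., `FFNWithNKB`, `num_extra_slots`) are our own illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn


class FFNWithNKB(nn.Module):
    """A Transformer FFN viewed as key-value memory, extended with extra slots."""

    def __init__(self, d_model: int, d_ff: int, num_extra_slots: int):
        super().__init__()
        # Original FFN: input projection acts as "keys", output projection as "values".
        self.w_in = nn.Linear(d_model, d_ff)
        self.w_out = nn.Linear(d_ff, d_model)
        # Extra memory slots forming the Neural Knowledge Bank.
        self.nkb_keys = nn.Linear(d_model, num_extra_slots, bias=False)
        self.nkb_values = nn.Linear(num_extra_slots, d_model, bias=False)
        self.act = nn.ReLU()

    def freeze_pretrained(self):
        # During knowledge injection, only the NKB slots receive gradient updates,
        # so the original model is untouched and cannot catastrophically forget.
        for p in list(self.w_in.parameters()) + list(self.w_out.parameters()):
            p.requires_grad = False

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        original = self.w_out(self.act(self.w_in(x)))        # pretrained memory
        extra = self.nkb_values(self.act(self.nkb_keys(x)))  # injected knowledge
        # Concatenating extra slots to the FFN is equivalent to summing the two outputs.
        return original + extra
```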