Opportunities of Hybrid Model-based Reinforcement Learning for Cell Therapy Manufacturing Process Development and Control

Hua Zheng, Wei Xie, Keqi Wang, Zheng Li

Driven by the key challenges of cell therapy manufacturing, including high complexity, high uncertainty, and very limited process data, we propose a stochastic optimization framework named "hybrid-RL" to efficiently guide process development and control. We first create a bioprocess probabilistic knowledge graph, a hybrid model that characterizes the current understanding of biomanufacturing process mechanisms and quantifies inherent stochasticity, such as batch-to-batch variation and bioprocess noise. It captures key features including nonlinear reactions, time-varying kinetics, and partially observed bioprocess states. This hybrid model can leverage existing mechanistic models and facilitate learning from process data. Given limited process data, a computational sampling approach is used to generate posterior samples that quantify model estimation uncertainty. We then introduce hybrid model-based Bayesian reinforcement learning (RL), which accounts for both inherent stochasticity and model uncertainty, to guide optimal, robust, and interpretable decision making and thereby overcome the key challenges of cell therapy manufacturing. In the empirical study, cell therapy manufacturing examples demonstrate that the proposed hybrid-RL framework can outperform classical process optimization assisted by deterministic mechanistic models.
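To make the core idea concrete, here is a minimal illustrative sketch (not the paper's actual model or algorithm) of how posterior samples over an uncertain model parameter can be folded into policy evaluation. The growth model, the toxicity term, the prior, and all parameter values below are hypothetical assumptions chosen only to show the mechanics: candidate control policies are scored against posterior samples of the uncertain growth coefficient `mu`, so the chosen action is robust to both process noise and model estimation uncertainty.

```python
# Hypothetical toy bioprocess: one-state cell growth
#   x_{t+1} = x_t + (mu * a_t - d - c * a_t**2) * x_t + noise
# where mu (growth coefficient) is uncertain, d is a decay rate, and the
# quadratic term models dose toxicity. All values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate(mu, actions, x0=1.0, d=0.05, c=0.2, noise=0.02):
    """Roll out the stochastic growth model for a fixed feeding sequence."""
    x = x0
    for a in actions:
        x = x + (mu * a - d - c * a**2) * x + rng.normal(0.0, noise)
    return x

# --- Posterior over mu from very limited batch data (Gaussian approximation) ---
true_mu = 0.30                                      # unknown in practice
obs = true_mu + rng.normal(0.0, 0.05, size=5)       # 5 noisy batch estimates
post_mean = obs.mean()
post_std = obs.std(ddof=1) / np.sqrt(len(obs))
mu_samples = rng.normal(post_mean, post_std, size=200)

# --- Bayesian evaluation: average over posterior samples AND process noise ---
def expected_yield(dose, horizon=10):
    actions = np.full(horizon, dose)
    return np.mean([simulate(mu, actions) for mu in mu_samples])

for dose in (0.2, 0.5, 0.8):
    print(f"dose={dose:.1f}  posterior-mean terminal biomass="
          f"{expected_yield(dose):.3f}")
```

The design point is that each candidate policy is scored under many plausible models drawn from the posterior, rather than under a single point estimate, which is what distinguishes this style of Bayesian RL from optimization against one deterministic mechanistic model.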
