Biopharmaceutical process development is inherently complex, demanding the integration of experimental data, mechanistic modeling, and computational workflows to achieve efficient and robust manufacturing. Traditional modeling approaches often operate in silos, lack modularity, and are difficult to adapt across diverse unit operations and scales, which limits their practical utility in end-to-end process design.
This talk presents a Python-based integrated modeling framework designed to support end-to-end process development by seamlessly linking upstream and downstream unit operations. Built on a modular architecture rooted in mechanistic modeling, the framework yields scalable, reusable, and interoperable models applicable to both individual unit operations and entire process trains.
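To make the modular idea concrete, the sketch below shows one way such an architecture could be organized: a common unit-operation interface whose instances are chained into a process train, so the output stream of an upstream step becomes the input of the next. The class names, stream representation, and simplified titer/yield models are illustrative assumptions, not the framework's actual API.

```python
from abc import ABC, abstractmethod

class UnitOperation(ABC):
    """Hypothetical common interface: map an input stream to an output stream."""
    @abstractmethod
    def simulate(self, stream: dict) -> dict: ...

class Bioreactor(UnitOperation):
    """Toy upstream step: sets the product concentration of the harvest stream."""
    def __init__(self, titer_g_per_L: float):
        self.titer = titer_g_per_L
    def simulate(self, stream: dict) -> dict:
        return {**stream, "product_g_per_L": self.titer}

class CaptureChromatography(UnitOperation):
    """Toy downstream step: recovers a fixed fraction of the incoming product."""
    def __init__(self, yield_fraction: float):
        self.yield_fraction = yield_fraction
    def simulate(self, stream: dict) -> dict:
        return {**stream,
                "product_g_per_L": stream["product_g_per_L"] * self.yield_fraction}

class ProcessTrain:
    """Chains unit operations so upstream outputs feed downstream inputs."""
    def __init__(self, units: list[UnitOperation]):
        self.units = units
    def simulate(self, stream: dict) -> dict:
        for unit in self.units:
            stream = unit.simulate(stream)
        return stream

train = ProcessTrain([Bioreactor(titer_g_per_L=5.0),
                      CaptureChromatography(yield_fraction=0.9)])
result = train.simulate({"volume_L": 2000.0})
print(result["product_g_per_L"])  # 4.5
```

Because every model conforms to the same interface, individual units can be simulated in isolation or swapped (e.g. a surrogate in place of a mechanistic model) without touching the rest of the train.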
Beyond its mechanistic foundation, the framework is engineered for flexibility. It enables the integration of complementary modeling strategies, including surrogate modeling and data-driven techniques, allowing users to dynamically adjust model complexity based on available data, evolving process conditions, and project objectives. Core features include multi-scale modeling capabilities, parameter estimation, and sensitivity analysis, collectively enhancing decision-making throughout the development lifecycle.
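A minimal sketch of what parameter estimation and local sensitivity analysis could look like in this setting, using a Monod growth model as a stand-in mechanistic model. The model choice, synthetic data, and function names are illustrative assumptions, not the framework's actual interface.

```python
import numpy as np
from scipy.optimize import curve_fit

def monod(S, mu_max, K_s):
    """Monod kinetics: specific growth rate as a function of substrate concentration."""
    return mu_max * S / (K_s + S)

# Synthetic observations generated from known "true" parameters (0.6, 1.2),
# standing in for experimental growth-rate data.
S_data = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # substrate, g/L
mu_data = monod(S_data, 0.6, 1.2)               # specific growth rate, 1/h

# Parameter estimation: recover mu_max and K_s from the observations.
(mu_max_hat, K_s_hat), _ = curve_fit(monod, S_data, mu_data, p0=[0.5, 1.0])

def local_sensitivity(S, params, i, eps=1e-6):
    """Central-difference sensitivity of the model output to parameter i."""
    hi = list(params); hi[i] += eps
    lo = list(params); lo[i] -= eps
    return (monod(S, *hi) - monod(S, *lo)) / (2 * eps)

# How strongly does the predicted growth rate at S = 2 g/L respond to mu_max?
sens_mu_max = local_sensitivity(2.0, (mu_max_hat, K_s_hat), 0)
print(mu_max_hat, K_s_hat, sens_mu_max)
```

The same estimate-then-probe pattern applies to any unit-operation model: fit parameters to available data, then use sensitivities to decide which parameters matter for the current project objective.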
Case studies in bioreactor control, chromatography characterization, and filtration optimization demonstrate the framework’s ability to reduce process variability, improve mechanistic insight, and support quality-by-design principles. These applications highlight the value of structured modeling in accelerating process understanding and enhancing product quality.
This framework contributes to the emerging paradigm of digital bioprocessing, in which modeling tools are deeply embedded in CMC workflows. By bridging mechanistic and data-driven approaches, it enables smarter, more adaptive process development and advances modeling practice in chemical and biochemical engineering.