2025 AIChE Annual Meeting

(469d) Learning & Solving: Equation Discovery across Disjoint Domains

Authors

Gianluca Fabiani, Università degli Studi di Napoli Federico II
Somdatta Goswami, Johns Hopkins University
In this work, we explore how to learn governing equations from partial and spatially separated data ensembles, focusing on the challenge of information sharing between disconnected regions—a problem reminiscent of communication between agents in generative AI systems [1, 2]. Specifically, we consider two disjoint space-time “corridors” of data (produced from a one-dimensional, nonlinear, parabolic partial differential equation), each with sensor measurements near their boundaries and ask: What is the minimal amount of shared information required to recover the underlying dynamics? Once the underlying dynamics are recovered, the missing region can then be appropriately filled in.

We begin by applying a “vanilla” dense neural network (DNN/MLP) to learn from both corridors jointly, in a supervised manner, but find that it fails to capture the dynamics. We then utilize a Deep Hidden Physics Model (DHPM) [3, 4], which employs one network to model the solution and another to infer the governing equation, thereby forcing the model to understand not only the provided data but its derivatives as well. We show that when the full space of candidate derivatives is represented across the two regions, the model can recover the correct equation—highlighting how a shared symbolic structure can bridge spatial gaps. Finally, we explore the use of a multi-agent DHPM, in which each corridor is modeled by independent networks that are coupled through a separate network constrained to learn the same equation. This setup allows us to investigate the minimal information each agent must contribute for successful joint learning. We demonstrate this framework on the FitzHugh-Nagumo model [5].
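The DHPM idea—one network for the solution, a second for the hidden right-hand side—can be sketched as follows. This is a minimal illustration in the spirit of Raissi [4]; the network sizes, placeholder data, and optimizer settings are our own assumptions, not the authors' configuration:

```python
import torch

# Minimal DHPM sketch: u_net models the solution u(x, t); f_net models
# the unknown right-hand side N(u, u_x, u_xx). The residual u_t - N(...)
# can be penalized at any collocation point, including inside the
# unobserved gap between the two corridors.
torch.manual_seed(0)

u_net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
f_net = torch.nn.Sequential(
    torch.nn.Linear(3, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def residual(xt):
    """PDE residual u_t - N(u, u_x, u_xx) via automatic differentiation."""
    xt = xt.clone().requires_grad_(True)
    u = u_net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
    return u_t - f_net(torch.cat([u, u_x, u_xx], dim=1))

# Toy data: labeled points standing in for the two corridors, plus
# unlabeled collocation points (which may lie in the gap).
xt_data = torch.rand(64, 2)
u_data = torch.sin(torch.pi * xt_data[:, :1])   # placeholder measurements
xt_coll = torch.rand(256, 2)

opt = torch.optim.Adam(
    list(u_net.parameters()) + list(f_net.parameters()), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = ((u_net(xt_data) - u_data) ** 2).mean() \
        + (residual(xt_coll) ** 2).mean()
    loss.backward()
    opt.step()
```

In the multi-agent variant described above, each corridor would get its own `u_net` while a single shared `f_net` is constrained to satisfy both residuals, which is what forces the agents onto a common equation.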

Our findings are reminiscent of “in-painting” in image processing: missing information in one region can be recovered using structural constraints shared with another. The work also draws loose parallels to multi-agent AI, where entities must communicate in a common “language” to solve tasks jointly. We conclude by drawing potential insights from our toy example about the nature of knowledge transfer between agents: how much information must be shared, and what form it must take, to enable collaborative understanding of complex dynamical systems from fragmented, incomplete data.

[1] Shu, R., Das, N., Yuan, M., Sunkara, M., & Zhang, Y. (2024). Towards Effective GenAI Multi-Agent Collaboration: Design and Evaluation for Enterprise Applications. arXiv preprint.

[2] Wu, D., Wei, X., Chen, G., Shen, H., Wang, X., Li, W., & Jin, B. (2025). Generative Multi-Agent Collaboration in Embodied AI: A Systematic Review. arXiv preprint.

[3] González-García, R., Rico-Martínez, R., & Kevrekidis, I. (1998). Identification of distributed parameter systems: A neural net based approach. Computers & Chemical Engineering, 22, S965–S968.

[4] Raissi, M. (2018). Deep Hidden Physics Models: Deep Learning of Nonlinear Partial Differential Equations. Journal of Machine Learning Research, 19(25), 1–24.

[5] FitzHugh, R. (1961). Impulses and Physiological States in Theoretical Models of Nerve Membrane. Biophysical Journal, 1(6), 445–466.