2025 AIChE Annual Meeting
(474g) AI-Assisted LPTEM: A Unified Framework for Nanoparticle Tracking and Physics-Informed Generative Modeling
To address this gap, we developed SAM4EM, an interactive framework that leverages Meta’s Segment Anything Model 2 (SAM2)—a foundation model for image and video segmentation—for zero-shot segmentation of LPTEM videos. SAM4EM enables promptable segmentation of noisy LPTEM data with minimal user input, achieving nearly 50-fold higher accuracy than existing segmentation methods on simulated datasets. Once segmented, particle positions are tracked across frames to yield particle trajectories. To interpret the extracted trajectories and uncover the physics governing nanoparticle motion, we developed LEONARDO (Learning Electron microscopy Of NAnopaRticle Diffusion via an attention netwOrk)—a transformer-based variational autoencoder trained on tens of thousands of experimental LPTEM trajectories. LEONARDO uses a physics-informed loss function to learn statistical features of trajectories such as non-Gaussian displacements and velocity autocorrelation, encoding them in a latent space that enables generation of synthetic trajectories reflecting the complexity of experimental data.
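To make the trajectory statistics concrete, the sketch below shows how the two features named above, non-Gaussian displacements and velocity autocorrelation, are conventionally computed from a 2D particle trajectory. This is an illustrative implementation of the standard definitions, not the authors' LEONARDO loss code; the function names and the normalization choices are assumptions.

```python
import numpy as np

def non_gaussian_parameter(traj, lag=1):
    """2D non-Gaussian parameter alpha_2 = <dr^4> / (2 <dr^2>^2) - 1.

    traj: (N, 2) array of particle positions; alpha_2 = 0 for a
    Gaussian displacement distribution (standard 2D convention).
    """
    disp = traj[lag:] - traj[:-lag]          # displacements at the given lag
    r2 = np.sum(disp**2, axis=1)             # squared displacement magnitudes
    return np.mean(r2**2) / (2.0 * np.mean(r2)**2) - 1.0

def velocity_autocorrelation(traj, max_lag):
    """Normalized VACF C(tau) = <v(t) . v(t+tau)> / <|v(t)|^2>.

    Velocities are frame-to-frame displacements; C(0) = 1 by construction.
    """
    v = np.diff(traj, axis=0)                # per-frame velocity vectors
    denom = np.mean(np.sum(v * v, axis=1))   # <|v|^2> for normalization
    return np.array([
        np.mean(np.sum(v[:len(v) - tau] * v[tau:], axis=1)) / denom
        for tau in range(max_lag + 1)
    ])
```

As a sanity check, a particle moving in a straight line at constant speed has perfectly correlated velocities (VACF equal to 1 at every lag) and identical displacement magnitudes, giving alpha_2 = 1/2 - 1 = -0.5.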
Together, SAM4EM and LEONARDO establish a unified AI-assisted framework for LPTEM, one that bridges robust trajectory extraction with physics-informed generative modeling. This combined toolset standardizes particle tracking and enables deep analysis of nanoparticle motion in complex environments, paving the way for data-driven insights into interfacial dynamics in biological systems, catalytic materials, and colloidal suspensions. By simulating realistic LPTEM trajectories, LEONARDO also opens new opportunities for automation in microscopy workflows and in silico exploration of experimental conditions.