While advancements in generative AI offer promising avenues for innovation in process safety, using AI to analyze complex text datasets such as process hazard analysis (PHA) recommendations remains a challenging endeavor. This paper presents a case study in which generative AI was used to analyze a dataset of 800 PHA recommendations, with the objectives of identifying trends, enhancing risk understanding, and pinpointing key areas for program improvement. Despite initial expectations, the analysis revealed limitations in generative AI's ability to accurately identify and contextualize trends within the data. The complexity and specificity of safety recommendations posed challenges: the models struggled to consistently categorize nuanced information or to distinguish patterns that require subject-matter-expert context and domain-specific insight.
This research underscores the gap between generative AI's current capabilities and the high precision required in process safety applications, raising important questions about the suitability of generative AI for automated decision support in safety-critical environments. While generative AI showed potential in assisting with straightforward engineering evaluations during PHAs, it fell short of delivering actionable insights for trend analysis and risk identification. This paper discusses the limitations observed, along with the ethical implications of relying on generative AI in such contexts. The findings highlight the importance of integrating expert oversight into generative AI-assisted analyses and propose avenues for refining AI approaches to better support the rigorous demands of process safety.