Integrating geometric deep learning with explainable artificial intelligence to assist protein engineering

Presenter: Dr. David Medina-Ortiz PhD

Authors:
Dr. David Medina-Ortiz PhD; Dr. Mehdi D. Davari PhD

Leibniz Institute of Plant Biochemistry (IPB)

Designing proteins with desirable properties has many applications in biotechnology. Traditional protein engineering approaches have benefited from advances in artificial intelligence (AI), yet integrating AI into protein engineering introduces challenges of explainability and interpretability. Explainable Artificial Intelligence (XAI) addresses these challenges by providing insight into how predictive models reach their decisions, identifying critical amino acid residues, uncovering relevant physicochemical properties, and guiding protein design strategies transparently.

This work presents an XAI-based framework for protein design that integrates geometric deep learning, transformer architectures, and explainability methods. Predictive models are built from graph neural networks and embeddings obtained from pre-trained protein language models. Graph-based methods, attention mechanisms, and feature engineering detect key residues and physicochemical properties, making the model predictions explainable. These insights, which link predictions to specific residues and properties, guide targeted modifications that enhance protein fitness.
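As a concrete illustration of the predictive component described above (a sketch added here, not part of the original abstract), the following Python example shows how per-residue embeddings from a pre-trained protein language model and a residue contact graph could be combined in a graph neural network that predicts protein fitness. It assumes PyTorch and PyTorch Geometric; the FitnessGNN class, the embedding dimension, and the random contact edges are illustrative placeholders rather than the authors' implementation.

```python
# Minimal sketch: residue-level GNN for fitness prediction.
# Node features stand in for per-residue protein language model embeddings;
# edges stand in for a residue contact map. All names and sizes are illustrative.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool


class FitnessGNN(nn.Module):
    def __init__(self, embed_dim=1280, hidden_dim=128):
        super().__init__()
        self.conv1 = GCNConv(embed_dim, hidden_dim)  # message passing over residue contacts
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, 1)      # scalar fitness per protein

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        h = global_mean_pool(h, batch)               # aggregate residues into one protein vector
        return self.readout(h).squeeze(-1)


num_residues = 56                                      # e.g. the GB1 domain length
x = torch.randn(num_residues, 1280)                    # stand-in for PLM embeddings
edge_index = torch.randint(0, num_residues, (2, 200))  # stand-in for contact-map edges
batch = torch.zeros(num_residues, dtype=torch.long)    # a single protein in the batch

model = FitnessGNN()
print(model(x, edge_index, batch))                     # predicted fitness (one value)
```

In practice such a model would be trained on measured fitness values, and the embeddings would come from an actual protein language model; the random tensors above only fix the shapes for the sketch.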

Validated on datasets such as GB1 and TrpB, the framework identifies critical residues and properties that can improve protein fitness. Generative strategies based on diffusion models are also being explored for E(3)-equivariant protein generation. The framework accelerates the design process and advances AI-driven protein engineering by enhancing interpretability, reliability, and predictive accuracy.
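To make the idea of identifying critical residues concrete, the sketch below applies a generic gradient-based saliency attribution to the illustrative model from the previous sketch: residues whose input embeddings most strongly influence the predicted fitness receive the highest scores and become candidates for targeted mutation. This is one common XAI technique and is not claimed to be the specific attribution method used in the framework.

```python
# Minimal sketch, continuing the FitnessGNN example above: rank residues by
# the gradient magnitude of the predicted fitness with respect to their
# input embeddings (a simple saliency-style attribution).
import torch


def residue_saliency(model, x, edge_index, batch):
    """Return one importance score per residue from input-gradient magnitudes."""
    x = x.detach().clone().requires_grad_(True)
    fitness = model(x, edge_index, batch).sum()
    fitness.backward()
    return x.grad.norm(dim=-1)  # L2 norm over the embedding dimension


scores = residue_saliency(model, x, edge_index, batch)
top_residues = torch.topk(scores, k=5).indices  # candidate positions for targeted mutation
print(top_residues)
```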

 
