Doctoral Study

Explainable Artificial Intelligence (XAI)

🧬 Research Focus

My research focuses on Explainable AI (XAI). The goal is to develop methods that explain the decision-making process of complex "black-box" machine learning models, especially deep neural networks.

Transparency and interpretability are crucial for deploying AI in high-stakes domains such as medicine and finance.

🧪 Current Status

  • Phase

    Literature review and formulation of research questions.

  • Methods

    Analysis of existing XAI techniques such as SHAP, LIME, and attention weights.

  • Note

    Publications and research results will be gradually added to this page.
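To illustrate one of the techniques under analysis, the core idea behind LIME (a local linear surrogate fit around a single prediction of a black-box model) can be sketched in a few lines. This is a minimal, illustrative sketch using scikit-learn, not the actual `lime` library; the function name, noise scale, and proximity kernel are all simplifying assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Toy "black-box" model on synthetic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(model, x, n_samples=1000, scale=0.5, seed=0):
    """LIME-style sketch: fit a weighted linear surrogate around x.
    (Illustrative helper, not part of any real XAI library.)"""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    # 2. Query the black box on the perturbed neighborhood.
    preds = model.predict_proba(Z)[:, 1]
    # 3. Weight samples by proximity to the original instance.
    weights = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2)
    # 4. Fit an interpretable linear surrogate on the neighborhood.
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    # Coefficients approximate each feature's local influence.
    return surrogate.coef_

coefs = explain_locally(black_box, X[0])
print(coefs)
```

The surrogate's coefficients give a per-feature approximation of the model's behavior near the explained instance, which is the same local-fidelity principle the real LIME implementation builds on.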