Explainable AI Research Group
The XAI (Explainable Artificial Intelligence) research group at LIACS, Leiden University, focuses on making AI and Evolutionary Computing (EC) systems more understandable and interpretable. Led by Dr. Niki van Stein, the team develops methods and techniques for explaining AI decision-making processes, with the aim of enhancing transparency and trust in AI technologies. Their work spans scientific domains and industry applications, including predictive maintenance, the analysis of heuristic optimization algorithms, and the development of novel explainable AI methods. This interdisciplinary effort involves collaboration with experts in machine learning and optimization, as well as domain experts, to develop explainable systems that are both effective and user-friendly.
Research projects:
- AI for Oversight: AI for Oversight ICAI lab
- XAIPre: eXplainable AI for Predictive Maintenance
- Complex Lens Design: Optimization of Complex Lens Designs
- CIMPLO: Cross-Industry Predictive Maintenance Optimization Platform