KI-MedTec is a new conference dedicated exclusively to artificial intelligence in medical technology. Over two days, an expert audience will meet to discuss the current state of the art as well as upcoming challenges. We look forward to discussing our solutions for "Explainable AI" with you.
At the event, two of our artificial intelligence experts will give a presentation on AI validation:
June 12, 2024 | Dr. Khanlian Chung | Dennis Hügle | Vector
Modern AI methods such as deep learning and convolutional neural networks have the potential to dramatically impact the field of automated medical analysis: for example, they give physicians easy access to segmentation maps of anatomical structures such as organs or blood vessels, or to biomarkers for pathologies in ECG and PPG signals. Despite all these benefits, deep learning algorithms have one major drawback: they are inherently black boxes.
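To make the segmentation use case concrete, here is a minimal sketch of how such an output is typically scored against a ground-truth annotation. The Dice coefficient shown is a standard overlap metric for segmentation masks; the tiny 4×4 masks and the function name are illustrative assumptions, not part of the talk.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy 4x4 "organ" masks: the prediction misses one pixel of the ground truth.
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True   # 4-pixel square
pred = truth.copy()
pred[2, 2] = False       # one false negative

print(round(dice_coefficient(pred, truth), 3))  # 2*3/(3+4) ≈ 0.857
```

A score of 1.0 means perfect overlap; in practice such a metric is evaluated over a held-out clinical test set rather than a single image.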
Their results are not transparent; we do not know which factors influenced the AI's decision. This makes them extremely difficult to test and validate. In a medical setting, however, we need to assess potential risks and failure modes that could lead to catastrophic outcomes. Regulations, guidelines, and standards attempt to give AI companies guidance on how to test their medical AI applications.
In this talk, we will present a use case and show which methods are available today for testing a state-of-the-art AI. We will present explainability methods, robustness tests, and key performance metrics to validate an example application.
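One simple, model-agnostic explainability method of the kind covered in such talks is occlusion sensitivity: mask one image region at a time, re-run the model, and record how much the score drops. The sketch below is an assumption for illustration only; `toy_model` stands in for a real classifier and is not the application from the talk.

```python
import numpy as np

def occlusion_map(model, image, patch=4, baseline=0.0):
    """Slide an occluding patch over the image and record the score drop.

    Large values mark regions whose removal hurts the model's score most,
    i.e. the regions the model relies on for its decision.
    """
    base_score = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = base_score - model(occluded)
    return heat

# Stand-in "model": its score depends only on the top-left image quadrant.
def toy_model(img):
    return img[:4, :4].mean()

rng = np.random.default_rng(0)
img = rng.random((8, 8))
heat = occlusion_map(toy_model, img, patch=4)
# Only the top-left cell of the heat map is non-zero, correctly exposing
# the single region that drives this model's decision.
```

Comparing such attribution maps against clinical expectations (does the model look at the organ, or at a scanner artifact?) is one way to turn a black box into something a reviewer can assess.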
We are looking forward to your visit and to some exciting discussions!