Safe reinforcement learning for medical applications

This project aims to develop safer and more secure AI decision-making systems for the medical domain. We plan to develop a new learning approach that combines probabilistic model checking with reinforcement learning and provides formal safety guarantees for the learned policies. This approach will be integrated into an adversarial learning framework that trains a target agent and an adversarial agent simultaneously; the goal is to make the target agent “immune” to adversarial attacks, thereby improving the security of the system. Finally, we will apply our method to Datarwe’s medical applications, such as decision-making for COVID-19 respirator settings and drug dosage.
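To make the combined idea concrete, here is a minimal Python sketch of shield-constrained Q-learning with a co-trained adversary. Everything in it is an illustrative assumption rather than the project's actual method: the toy dose-level MDP, the reachability-based shield standing in for a full probabilistic model checker, and the tabular Q-learning agents are all hypothetical.

```python
"""Minimal sketch: a shield restricts the target agent to actions it
can verify as safe, while an adversary that perturbs observations is
trained simultaneously. The toy MDP, shield, and hyperparameters are
illustrative assumptions, not Datarwe's actual method or data."""
import random

N_STATES = 7                     # dose levels 0..6; 0 and 6 are unsafe
ACTIONS = [-1, 0, +1]            # decrease, hold, or increase the dose
UNSAFE = {0, N_STATES - 1}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = -abs(nxt - 3)       # prefer the mid-range target dose
    return nxt, reward, nxt in UNSAFE

def safe_actions(state):
    # Stand-in for probabilistic model checking: forbid any action whose
    # successor is unsafe. A real shield would bound the *probability*
    # of reaching UNSAFE under the (stochastic) MDP dynamics.
    allowed = [a for a in ACTIONS
               if min(max(state + a, 0), N_STATES - 1) not in UNSAFE]
    return allowed or ACTIONS    # never leave the agent without a choice

PERTS = (-1, 0, +1)              # observation perturbations the adversary may apply
q_target = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
q_adv = {(s, p): 0.0 for s in range(N_STATES) for p in PERTS}
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.2

for episode in range(2000):
    state = 3
    for t in range(20):
        # Adversary distorts the observation the target agent sees.
        pert = (random.choice(PERTS) if random.random() < EPS
                else max(PERTS, key=lambda p: q_adv[(state, p)]))
        obs = min(max(state + pert, 0), N_STATES - 1)

        # Target acts greedily, but only over shield-approved actions;
        # the shield checks the true state, so safety holds regardless
        # of how badly the adversary corrupts the observation.
        allowed = safe_actions(state)
        act = (random.choice(allowed) if random.random() < EPS
               else max(allowed, key=lambda a: q_target[(obs, a)]))

        nxt, reward, done = step(state, act)

        # Target maximises reward; the adversary maximises its negation.
        best_next = max(q_target[(nxt, a)] for a in ACTIONS)
        q_target[(obs, act)] += ALPHA * (
            reward + GAMMA * best_next - q_target[(obs, act)])
        best_next_adv = max(q_adv[(nxt, p)] for p in PERTS)
        q_adv[(state, pert)] += ALPHA * (
            -reward + GAMMA * best_next_adv - q_adv[(state, pert)])

        state = nxt
        if done:
            break
```

Because the shield filters actions against the true state, the unsafe states are never reached during training, which is the kind of formal guarantee the model-checking component is meant to provide; the adversarial loop then hardens the learned policy against corrupted inputs.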
