Towards Safe Propofol Dosing during General Anesthesia Using Deep Offline Reinforcement Learning. (arXiv:2303.10180v1 [cs.LG])

Automated anesthesia promises more precise and personalized anesthetic
administration, and it can free anesthesiologists from repetitive tasks so
they can focus on the most critical aspects of a patient’s surgical care.
Current research has typically focused on training agents in simulated
environments. These approaches have shown good experimental results but
remain far from clinical application. In this
paper, we propose Policy Constraint Q-Learning (PCQL), a data-driven
reinforcement learning algorithm for learning anesthesia strategies from real
clinical datasets. Conservative Q-Learning is first introduced to alleviate
overestimation of the Q-function in the offline setting. A policy constraint
term is then added to agent training to keep the agent’s policy distribution
consistent with the anesthesiologist’s, so that the agent makes safer
decisions in anesthesia scenarios.
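
The abstract does not spell out the training objective, so the following is
only a plausible sketch: a TD loss combined with a conservative (CQL-style)
penalty and a behavior-constraint term that pulls the agent toward the
anesthesiologist’s logged doses. Names such as pcql_loss, q_net, alpha, and
beta are illustrative, and discrete dose bins are assumed.

    import torch
    import torch.nn.functional as F

    def pcql_loss(q_net, target_q_net, batch, alpha=1.0, beta=1.0, gamma=0.99):
        s, a, r, s_next, done = batch  # logged transitions from clinical records

        q_values = q_net(s)                                   # (B, num_dose_bins)
        q_sa = q_values.gather(1, a.unsqueeze(1)).squeeze(1)  # Q of logged doses
        with torch.no_grad():
            next_q = target_q_net(s_next).max(dim=1).values
            target = r + gamma * (1.0 - done) * next_q
        td_loss = F.mse_loss(q_sa, target)

        # Conservative penalty: push Q down on out-of-distribution doses
        # relative to the doses the anesthesiologist actually gave.
        cql_penalty = (torch.logsumexp(q_values, dim=1) - q_sa).mean()

        # Policy constraint: keep the agent's action distribution close to the
        # clinician's logged actions (a behavior-cloning-style surrogate).
        policy_constraint = F.cross_entropy(q_values, a)

        return td_loss + alpha * cql_penalty + beta * policy_constraint
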
The effectiveness of PCQL was validated through extensive experiments on a
real clinical anesthesia dataset.
Experimental results show that PCQL achieves higher estimated gains than the
baseline approach while maintaining good agreement with the
anesthesiologist’s reference doses, using a lower total dose, and responding
more readily to the patient’s vital signs. In addition, the agent’s
confidence intervals were investigated and found to cover most of the
anesthesiologist’s clinical decisions.
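
The abstract does not state how these intervals are constructed; one common
approach, sketched here purely as an assumption, is to train an ensemble of
Q-networks, take percentiles of their recommended doses per state, and then
measure how often the clinician’s dose falls inside the interval.

    import numpy as np
    import torch

    def dose_interval(q_nets, state, low=2.5, high=97.5):
        # Recommended dose bin from each ensemble member for one state.
        with torch.no_grad():
            doses = np.array([net(state).argmax(dim=-1).item() for net in q_nets])
        return np.percentile(doses, [low, high])  # e.g., a 95% interval

    def coverage(q_nets, states, clinician_doses):
        # Fraction of logged clinician doses inside the agent's interval.
        hits = 0
        for s, d in zip(states, clinician_doses):
            lo, hi = dose_interval(q_nets, s)
            hits += int(lo <= d <= hi)
        return hits / len(states)
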
Finally, SHAP, an interpretability method, was used to analyze which
components contribute to the model’s predictions, increasing the model’s
transparency.
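
As an illustration of how such an analysis might look (the abstract names
SHAP but not the exact setup, so q_net, states, and feature_names below are
assumptions), the model-agnostic KernelExplainer from the shap package can
attribute the agent’s recommended dose to individual vital-sign features:

    import numpy as np
    import shap
    import torch

    def predicted_dose(x: np.ndarray) -> np.ndarray:
        # Greedy dose bin chosen by the Q-network for each patient state.
        with torch.no_grad():
            q = q_net(torch.as_tensor(x, dtype=torch.float32))
        return q.argmax(dim=1).numpy().astype(float)

    background = states[np.random.choice(len(states), 100, replace=False)]
    explainer = shap.KernelExplainer(predicted_dose, background)
    shap_values = explainer.shap_values(states[:50])  # per-feature attributions
    shap.summary_plot(shap_values, states[:50], feature_names=feature_names)
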

Source: https://arxiv.org/abs/2303.10180
