One-shot domain adaptation in video-based assessment of surgical skills. (arXiv:2301.00812v1 [cs.CV])

Deep Learning (DL) has enabled automatic and objective assessment of
surgical skills. However, DL models are data-hungry and restricted to their
training domain, which prevents them from transferring to new tasks where data
are limited. Domain adaptation is therefore crucial for deploying DL in real-life settings.
Here, we propose a meta-learning model, A-VBANet, that can deliver
domain-agnostic surgical skill classification via one-shot learning. We develop
A-VBANet on five laparoscopic and robotic surgical simulators and additionally
test it on operating room (OR) videos of laparoscopic
cholecystectomy. Our model successfully adapts with accuracies up to 99.5% in
one-shot and 99.9% in few-shot settings for simulated tasks and 89.7% for
laparoscopic cholecystectomy. For the first time, we provide a domain-agnostic
procedure for video-based assessment of surgical skills. A significant
implication of this approach is that it allows the use of data from surgical
simulators to assess performance in the operating room.
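
The abstract does not describe A-VBANet's architecture or training procedure, but the core idea of adapting a skill classifier to a new surgical domain from a single labeled video per class can be illustrated with a prototype-based metric meta-learning sketch. Everything below (the `VideoEncoder`, embedding size, and feature shapes) is a hypothetical stand-in for illustration, not the authors' implementation:

```python
# Minimal sketch of one-shot domain adaptation via nearest-prototype
# classification over video embeddings. All names and shapes are assumptions;
# the paper's A-VBANet model is not specified in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VideoEncoder(nn.Module):
    """Placeholder encoder mapping pre-extracted frame features to one clip embedding."""
    def __init__(self, in_features: int = 2048, embed_dim: int = 256):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_features, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, in_features); temporal average pooling over frames
        return self.proj(x).mean(dim=1)

@torch.no_grad()
def one_shot_adapt(encoder, support_clips, support_labels, query_clips):
    """Classify query videos in a new domain given one labeled support video
    per skill class (e.g., novice vs. expert)."""
    support_emb = F.normalize(encoder(support_clips), dim=-1)
    query_emb = F.normalize(encoder(query_clips), dim=-1)

    # One prototype per class: the embedding of its single support video.
    classes = support_labels.unique()
    prototypes = torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in classes]
    )

    # Nearest-prototype classification by cosine similarity.
    sims = query_emb @ prototypes.T  # (num_query, num_classes)
    return classes[sims.argmax(dim=-1)]

if __name__ == "__main__":
    enc = VideoEncoder()
    support = torch.randn(2, 16, 2048)   # one clip per skill class
    labels = torch.tensor([0, 1])        # 0 = novice, 1 = expert
    queries = torch.randn(4, 16, 2048)
    print(one_shot_adapt(enc, support, labels, queries))
```

In this style of adaptation no gradient updates are needed at deployment time; the encoder is meta-trained across source domains (here, the simulator tasks) so that a single labeled example per class suffices to define the decision rule in a new domain such as OR video.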

Source: https://arxiv.org/abs/2301.00812
