In this work, we tackle the challenge of open-set recognition in surgical robotics. Our OSSAR framework improves classification of known surgical activities while also detecting unknown activities that were never seen during training.
Key contributions:
• A hyperspherical reciprocal point strategy to better separate known and unknown classes
• A calibration technique to reduce overconfident misclassifications
• New open-set benchmarks on the JIGSAWS dataset and our novel DREAMS dataset for endoscopic procedures
• State-of-the-art performance on open-set surgical activity recognition tasks
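For readers curious how reciprocal points enable open-set rejection, here is a minimal NumPy sketch of the general idea (not our exact implementation; the feature dimension, number of classes, and rejection threshold below are illustrative assumptions). Features are projected onto the unit hypersphere, each known class is represented by a learned "reciprocal point," and a sample is scored by its distance to those points: a known-class sample should lie far from its class's reciprocal point, so a low maximum distance suggests an unknown activity.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    # project features onto the unit hypersphere
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# toy setup: 3 known activity classes, 8-dim features
# (in practice the reciprocal points are learned during training)
num_classes, dim = 3, 8
reciprocal_points = normalize(rng.normal(size=(num_classes, dim)))

def open_set_scores(features, reciprocal_points):
    """Score samples by distance to each class's reciprocal point.

    A sample of class k should lie FAR from reciprocal point R_k,
    so the farthest reciprocal point gives the predicted class, and
    a small maximum distance signals an unknown (open-set) sample.
    """
    f = normalize(features)
    # squared Euclidean distance to each reciprocal point
    dists = ((f[:, None, :] - reciprocal_points[None, :, :]) ** 2).sum(-1)
    preds = dists.argmax(axis=1)    # farthest reciprocal point wins
    confidence = dists.max(axis=1)  # low value -> likely unknown
    return preds, confidence

feats = rng.normal(size=(5, dim))
preds, conf = open_set_scores(feats, reciprocal_points)
unknown_mask = conf < 2.0  # hypothetical rejection threshold
```

The appeal of this formulation is that "unknown" occupies the region near the reciprocal points themselves, so rejection falls out of the same distance computation used for classification.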
This research takes an important step towards more robust and generalizable AI systems for surgical robots. We hope it will help pave the way for safer and more capable robot-assisted surgeries.
Thanks to all the amazing co-authors Long Bai, Guankun Wang, Jie Wang, Xiaoxiao Yang, Huxin Gao, Xin Liang, An Wang, Mobarakol Islam, and Hongliang Ren,
and our institutions (The Chinese University of Hong Kong, Beijing Institute of Technology, Qilu Hospital of Shandong University, Tongji University, University College London, National University of Singapore) for their support.
You can find more details in our paper https://lnkd.in/gDsjVDSP