๐ŸŽ‰ ๐ŸŽ‰ We are thrilled to announce that our latest work on surgical workflow recognition has been accepted by the Information Fusion journal (IF: 14.8).

๐Ÿ—ž๏ธ We propose a multimodal Graph Representation network with Adversarial feature Disentanglement (GRAD) for robust surgical workflow recognition in challenging scenarios with domain shifts or corrupted data. Specifically, we introduce a Multimodal Disentanglement Graph Network (MDGNet) that captures fine-grained visual information while explicitly modeling the complex relationships between vision and kinematic embeddings through graph-based message modeling. To align feature spaces across modalities, we propose a Vision-Kinematic Adversarial (VKA) framework that leverages adversarial training to reduce modality gaps and improve feature consistency. Furthermore, we design a Contextual Calibrated Decoder, incorporating temporal and contextual priors to enhance robustness against domain shifts and corrupted data.

๐Ÿ—ž๏ธ Extensive comparative and ablation experiments demonstrate the effectiveness of our model and proposed modules. Specifically, we achieved an accuracy of 86.87% and 92.38% on two public datasets, respectively. Moreover, our robustness experiments show that our method effectively handles data corruption during storage and transmission, exhibiting excellent stability and robustness. Our approach aims to advance automated surgical workflow recognition, addressing the complexities and dynamism inherent in surgical procedures.

๐Ÿ“‘ Paper link: https://lnkd.in/g9SWYPJg

๐Ÿ‘ ๐Ÿ‘ We extend our gratitude to our co-authors and collaborators from around the world, including Long Bai, Boyi Ma, Ruohan Wang, Guankun Wang, Beilei Cui, Zhongliang Jiang, Mobarakol Islam, Zhe Min, Jiewen Lai, Nassir Navab, and Hongliang Ren!
