🎉 🎉 We are thrilled to announce that our latest work on surgical workflow recognition has been accepted by Information Fusion (IF: 14.8).

๐Ÿ—ž๏ธ We propose a multimodal Graph Representation network with Adversarial feature Disentanglement (GRAD) for robust surgical workflow recognition in challenging scenarios with domain shifts or corrupted data. Specifically, we introduce a Multimodal Disentanglement Graph Network (MDGNet) that captures fine-grained visual information while explicitly modeling the complex relationships between vision and kinematic embeddings through graph-based message modeling. To align feature spaces across modalities, we propose a Vision-Kinematic Adversarial (VKA) framework that leverages adversarial training to reduce modality gaps and improve feature consistency. Furthermore, we design a Contextual Calibrated Decoder, incorporating temporal and contextual priors to enhance robustness against domain shifts and corrupted data.

๐Ÿ—ž๏ธ Extensive comparative and ablation experiments demonstrate the effectiveness of our model and proposed modules. Specifically, we achieved an accuracy of 86.87% and 92.38% on two public datasets, respectively. Moreover, our robustness experiments show that our method effectively handles data corruption during storage and transmission, exhibiting excellent stability and robustness. Our approach aims to advance automated surgical workflow recognition, addressing the complexities and dynamism inherent in surgical procedures.

📑 Paper link: https://lnkd.in/g9SWYPJg

๐Ÿ‘ ๐Ÿ‘ We extend our gratitude to our co-authors and collaborators from around the world, including Long Bai, Boyi Ma, Ruohan Wang, Guankun Wang, Beilei Cui, Zhongliang Jiang, Mobarakol Islam, Zhe Min, Jiewen Lai, Nassir Navab, and Hongliang Ren!


🚀 Call for Papers: IoWT 2025 🚀

The ๐Ÿ๐ฌ๐ญ ๐–๐จ๐ซ๐ค๐ฌ๐ก๐จ๐ฉ ๐จ๐ง ๐ˆ๐ง๐ญ๐ž๐ซ๐ง๐ž๐ญ ๐จ๐Ÿ ๐–๐ž๐š๐ซ๐š๐›๐ฅ๐ž ๐“๐ก๐ข๐ง๐ ๐ฌ (๐ˆ๐จ๐–๐“ ๐Ÿ๐ŸŽ๐Ÿ๐Ÿ“) will be held at The IEEE 11th World Forum on IoT in ๐‚๐ก๐ž๐ง๐ ๐๐ฎ, ๐‚๐ก๐ข๐ง๐š (๐Ž๐œ๐ญ ๐Ÿ๐Ÿ•โ€“๐Ÿ‘๐ŸŽ, ๐Ÿ๐ŸŽ๐Ÿ๐Ÿ“)!

We invite submissions on AI-driven wearable systems, energy-efficient IoT, human-centric automation, and scalable intelligence.

📅 Key Dates:

📝 Submission Deadline: June 15, 2025

📢 Notification: July 31, 2025

📄 Camera-ready: August 15, 2025

🔗 More details: IoWT 2025 Workshop (https://lnkd.in/gj_arCXX)

Our lab members’ achievements at the MRC Symposium 2025

๐Ÿ† ๐‘ฉ๐’†๐’”๐’• ๐‘ซ๐’†๐’”๐’Š๐’ˆ๐’ ๐‘จ๐’˜๐’‚๐’“๐’…

Tinghua Zhang, Sishen YUAN, et al., for “PneumaOCT: Pneumatic optical coherence tomography endoscopy for targeted distortion-free imaging in tortuous and narrow internal lumens”, a collaboration between the CUHK ABI Lab (https://lnkd.in/gUuzQqDt) and RENLab (labren.org), published in Science Advances (DOI: 10.1126/sciadv.adp3145).

🔬 Best Application Award

Dr. Mengya Xu, Wenjin Mo, et al., for “ETSM: Automating Dissection Trajectory Suggestion and Confidence Map-Based Safety Margin Prediction for Robot-assisted Endoscopic Submucosal Dissection”, accepted at #ICRA2025 (arXiv preprint: arXiv:2411.18884).

🌟 Congratulations to our brilliant team members on these well-deserved recognitions!

Additionally, Prof. Hongliang Ren delivered an insightful talk, “Endoscopic Multisensory Navigation with Soft Flexible Robotics”, highlighting the latest advancements in endoscopic navigation and soft medical robotics.


๐Ÿ“ข ๐‚๐š๐ฅ๐ฅ ๐Ÿ๐จ๐ซ ๐๐š๐ฉ๐ž๐ซ๐ฌ โ€“ ๐ˆ๐„๐„๐„ ๐ˆ๐‚๐ˆ๐€ ๐Ÿ๐ŸŽ๐Ÿ๐Ÿ“ ๐Ÿš€ Join us at the 2025 International Conference on Information and Automation (๐ˆ๐‚๐ˆ๐€ ๐Ÿ๐ŸŽ๐Ÿ๐Ÿ“) in Lanzhou, China, from August 28โ€“31, 2025!

This conference serves as a platform for researchers and practitioners to discuss advancements, challenges, and opportunities in information, automation, artificial intelligence, robotics, image processing, computer vision, digital signal processing (DSP), and biomedical engineering (BME).

📅 Submission Deadline: June 15, 2025

🔗 Conference Website: http://www.icia2025.org/

🎉 Excited to share our paper, “Disentangling Contact Location for Stretchable Tactile Sensors from Soft Waveguide Ultrasonic Scatter Signals”, published in Advanced Intelligent Systems.

In this work, we tackled a long-standing challenge in soft tactile sensing: accurately localizing a contact point on a stretchable sensor even in the presence of strain and variable contact forces. Our approach uses ultrasonic scatter signals extracted from a soft waveguide to decouple these intertwined effects. A data-driven method was developed, combining:

– Global feature extraction: Using the Hilbert transform to capture the overall energy distribution before and after force contact.

– Local feature extraction: Leveraging the continuous wavelet transform (CWT) to retrieve high-resolution time-frequency characteristics.

– Deep learning integration: Fusing these features through a deep convolutional neural network and multilayer perceptron regression, which allowed us to achieve a mean absolute error of just 0.627 mm and a mean relative error of 3.19%.
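The two feature extractors above can be sketched generically. This is a toy NumPy illustration, not the paper's implementation: an FFT-based Hilbert transform recovers the signal envelope (the global energy distribution), and a naive Morlet-wavelet CWT yields local time-frequency coefficients.

```python
import numpy as np

def hilbert_envelope(x):
    """Global feature: signal envelope via the analytic signal (FFT-based Hilbert)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)          # spectral weights that zero out negative frequencies
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))  # |analytic signal| = envelope

def morlet_cwt(x, scales, w0=6.0):
    """Local feature: naive continuous wavelet transform with a Morlet wavelet."""
    coeffs = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        psi = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        # Correlate the signal with the wavelet at this scale
        coeffs[i] = np.abs(np.convolve(x, np.conj(psi[::-1]), mode="same"))
    return coeffs

# Example: a pure 5-cycle tone; the envelope is flat at the tone's amplitude
x = np.cos(2 * np.pi * 5 * np.linspace(0, 1, 256, endpoint=False))
env = hilbert_envelope(x)
tf = morlet_cwt(x, scales=np.arange(2, 32))
```

In the actual pipeline such global and local feature maps would be fused by the CNN/MLP regressor; the sketch only shows where the two feature families come from.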

This fusion of global and local signal analysis not only overcomes limitations of traditional time-of-flight estimation methods but also paves the way for more robust multimodal sensing in robotics and human-machine interfaces. The implications for advanced robotics, intelligent prosthetics, and other emerging applications are truly exciting.

Authors: Zhiheng Li, Yuan Lin, Peter Shull, and Hongliang Ren

The paper is available at https://lnkd.in/dHHgw4qp


Six of our lab members have arrived in Atlanta and will be at ICRA presenting the following seven papers:

1) Three-Dimension Tip Force Perception and Axial Contact Location Identification for Flexible Endoscopy Using Tissue-Compliant Soft Distal Attachment Cap Sensors
2) TASEC: Rescaling Acquisition Strategy with Energy Constraints under Fusion Kernel for Active Incision Recommendation in Tracheotomy
3) ETSM: Automating Dissection Trajectory Suggestion and Confidence Map-Based Safety Margin Prediction for Robot-Assisted Endoscopic Submucosal Dissection
4) Advancing Dense Endoscopic Reconstruction with Gaussian Splatting-Driven Surface Normal-Aware Tracking and Mapping
5) Minimally Invasive Endotracheal Inside-Out Flexible Needle Driving System towards Microendoscope-Guided Robotic Tracheostomy
6) Variable-Stiffness Nasotracheal Intubation Robot with Passive Buffering: A Modular Platform in Mannequin Studies
7) SurgPLAN++: Universal Surgical Phase Localization Network for Online and Offline Inference
Looking forward to catching up!


🌟 Exciting News! Our latest research has been accepted by IEEE Transactions on Instrumentation and Measurement (TIM). Introducing our newest paper, “Endotracheal Untethered Retractable Drill Mechanism toward Robot-assisted Endoluminal Inside-Out Dilatational Tracheostomy”, led by Sishen YUAN and Baijia Liang.

๐Ÿ” This paper addresses several key challenges in PDT, including cross-contamination, constrained operative space, complex anatomy, and the lack of tactile feedback during manual procedures. To overcome these barriers, we propose an โ€œinside-outโ€ robotic system equipped with a retractable drill and wireless magnetic actuation. By integrating feedback from tactile and magnetic sensors along with precise control mechanisms, the system enhances the safety of tracheal punctures, preventing damage to adjacent tissues, such as the esophagus, and reducing reliance on manual expertise.

🔬 Ex vivo experiments on porcine tracheas validated the feasibility of this approach, demonstrating effective puncture with a maximum localization deviation of 6.308 mm, preliminarily confirming the system’s potential to achieve safer and more consistent outcomes in clinical settings.

Paper link: https://lnkd.in/gEgmaDVj


We are pleased to share our latest work, “Automatic Virtual-to-Real Calibration and Dynamic Registration of Deformable Tissue for Endoscopic Submucosal Dissection”, recently accepted in Advanced Intelligent Systems. 🎉 🎉

This research presents an automatic calibration and dynamic registration method specifically designed for deformable tissues, integrating Augmented Reality (AR) technology to enhance surgical precision in Endoscopic Submucosal Dissection (ESD).

Our approach leverages a 6D pose estimator to align virtual and real-world target tissues seamlessly, utilizing the SuperGlue feature-matching network and the Metric3D depth estimation network for robust fusion. Additionally, our dynamic registration method enables real-time tracking of tissue deformation, ensuring more reliable surgical guidance.

Experimental validation demonstrated the effectiveness of our system: automatic calibration experiments using cloth achieved a mean absolute error (MAE) of 3.79 ± 0.64 mm, and dynamic registration accuracy assessed under varying tissue deformation yielded an MAE of 6.03 ± 0.96 mm. Ex vivo experiments with porcine small intestine tissue further validated the system’s performance, with an AR calibration MAE of 3.11 ± 0.56 mm and a dynamic registration MAE of 3.20 ± 1.96 mm.
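As a generic illustration of the registration metric reported above (this is not the paper's code, and the pose values are hand-picked for the example), a rigid 6D pose (R, t) maps virtual-model points into the real camera frame, and the MAE is the mean per-point Euclidean distance after alignment:

```python
import numpy as np

def apply_pose(points, R, t):
    """Map (N, 3) virtual-model points into the camera frame with a rigid pose."""
    return points @ R.T + t

def registration_mae(virtual_pts, observed_pts, R, t):
    """Mean per-point Euclidean error after alignment (in mm if inputs are mm)."""
    aligned = apply_pose(virtual_pts, R, t)
    return float(np.mean(np.linalg.norm(aligned - observed_pts, axis=1)))

# Example with a known answer: a pure [1, 2, 2] mm offset moves every point
# by norm([1, 2, 2]) = 3 mm, so the MAE is exactly 3 mm.
pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 5.0, 5.0]])
mae = registration_mae(pts, pts, np.eye(3), np.array([1.0, 2.0, 2.0]))
```

In the full system R and t would come from the 6D pose estimator and the observed points from the depth/feature pipeline; the snippet only pins down how the reported millimetre errors are computed.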

The full paper is in production and will be available at https://lnkd.in/gMFCDWv4


We’re delighted to share that our paper, “Innovating Robot-Assisted Surgery through Large Vision Models”, has been accepted for publication in Nature Reviews Electrical Engineering! 🎉

In this paper, we explore the role of large vision models in advancing robot-assisted surgery, analyzing key developments and discussing future directions in AI-driven surgical innovation. By examining emerging trends and challenges, we contribute to the broader conversation on how intelligent visual systems can enhance precision, adaptability, and decision-making in surgical robotics.

Congrats to Prof. Zhe Min, Prof. Sam, Jiewen Lai, and Prof. Hongliang Ren!

Check out the full paper here: https://rdcu.be/elIbb

diagram

🚀 Thrilled to Share Highlights from #RoboSoft2025, the 8th IEEE-RAS International Conference on Soft Robotics!

At the workshop “Origami and Kirigami: How Paper Folding and Cutting Have Revolutionized Soft Robotics and What’s Beyond” (https://lnkd.in/gsYvat-X), Professor Hongliang Ren delivered a keynote speech titled “Tetherless Reconfigurations at Origami-Continuum Interfaces.” Drawing on our group’s recent research published in top-tier journals such as Science Robotics, Nature Communications, Advanced Functional Materials, ACS Nano, and Advanced Materials Technologies, he explored how innovative materials, including high-temperature-resistant, phase-change elastic, and magnetically responsive ones, are revolutionizing origami-based robots. These works enable precise, continuous motion for in vivo medical applications and highlight the vast interdisciplinary opportunities in medical soft robotics.

Additionally, our PhD student WENCHAO YUE delivered three oral presentations based on our accepted papers. Specifically, he showcased a compact OCT-based tactile sensor (2 mm in diameter, developed in collaboration with ABI Lab (https://lnkd.in/gUuzQqDt) under Prof. Wu ‘Scott’ YUAN at CUHK BME), a bistable origami robot designed for microneedle puncture (in partnership with Xu’s Lab (https://lnkd.in/gdb-5tTv), led by Prof. CHENJIE XU at CityU BME), and an ink-based stretching sensor inspired by kirigami principles. His talks highlighted the latest research progress in our group and the exciting potential of these technologies in soft robotics for medical applications.
