Our lab members' achievements at the MRC Symposium 2025

๐Ÿ† ๐‘ฉ๐’†๐’”๐’• ๐‘ซ๐’†๐’”๐’Š๐’ˆ๐’ ๐‘จ๐’˜๐’‚๐’“๐’…

Tinghua Zhang, Sishen YUAN et al. for “PneumaOCT: Pneumatic optical coherence tomography endoscopy for targeted distortion-free imaging in tortuous and narrow internal lumens”, a collaboration between CUHK ABI Lab (https://lnkd.in/gUuzQqDt) and RENLab (labren.org), published in Science Advances (DOI: 10.1126/sciadv.adp3145).

🔬 Best Application Award

Dr. Mengya Xu, Wenjin Mo et al. for their work:

“ETSM: Automating Dissection Trajectory Suggestion and Confidence Map-Based Safety Margin Prediction for Robot-assisted Endoscopic Submucosal Dissection”, accepted at #ICRA2025 (arXiv preprint: arXiv:2411.18884).

🌟 Congratulations to our brilliant team members on these well-deserved recognitions!

Additionally, Prof. Hongliang Ren delivered an insightful talk, “Endoscopic Multisensory Navigation with Soft Flexible Robotics”, highlighting the latest advancements in endoscopic navigation and soft medical robotics.


📢 Call for Papers – IEEE ICIA 2025 🚀 Join us at the 2025 International Conference on Information and Automation (ICIA 2025) in Lanzhou, China, from August 28–31, 2025!

This conference serves as a platform for researchers and practitioners to discuss advancements, challenges, and opportunities in information, automation, artificial intelligence, robotics, image processing, computer vision, DSP, and BME.

📅 Submission Deadline: June 15, 2025

🔗 Conference Website: http://www.icia2025.org/

🎉 Excited to share our paper titled “Disentangling Contact Location for Stretchable Tactile Sensors from Soft Waveguide Ultrasonic Scatter Signals”, published in Advanced Intelligent Systems.

In this work, we tackled a long-standing challenge in soft tactile sensing: accurately localizing a contact point on a stretchable sensor even in the presence of strain and variable contact forces. Our approach uses ultrasonic scatter signals extracted from a soft waveguide to decouple these intertwined effects. We developed a data-driven method combining:

– Global feature extraction: Using the Hilbert transform to capture the overall energy distribution before and after force contact.

– Local feature extraction: Leveraging the continuous wavelet transform (CWT) to retrieve high-resolution time–frequency characteristics.

– Deep learning integration: Fusing these features through a deep convolutional neural network and multilayer perceptron regression, which allowed us to achieve a mean absolute error of just 0.627 mm and a mean relative error of 3.19%.
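For readers curious how the two feature streams might look in practice, below is a minimal Python sketch of the global (Hilbert envelope) and local (CWT scalogram) extraction steps. The segment count, wavelet, scale range, and sampling rate are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np
import pywt
from scipy.signal import hilbert

def global_features(signal: np.ndarray) -> np.ndarray:
    """Hilbert envelope summarizing the overall energy distribution."""
    envelope = np.abs(hilbert(signal))          # analytic-signal magnitude
    segments = np.array_split(envelope, 16)     # coarse energy profile
    return np.array([np.mean(seg ** 2) for seg in segments])

def local_features(signal: np.ndarray, fs: float) -> np.ndarray:
    """CWT scalogram capturing high-resolution time-frequency detail."""
    scales = np.arange(1, 65)                   # illustrative scale range
    coeffs, _ = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
    return np.abs(coeffs)                       # (scales, time) magnitudes

# Hypothetical usage: features from one scatter signal, ready to feed a
# CNN (scalogram) and an MLP (energy vector) in a fusion model.
fs = 1e6                                        # assumed sampling rate
sig = np.random.randn(4096)                     # stand-in scatter signal
g = global_features(sig)                        # shape (16,)
l = local_features(sig, fs)                     # shape (64, 4096)
```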

This fusion of global and local signal analysis not only overcomes the limitations of traditional time-of-flight estimation methods but also paves the way for more robust multimodal sensing in robotics and human–machine interfaces. The implications for advanced robotics, intelligent prosthetics, and other emerging applications are truly exciting.

Authors: Zhiheng Li, Yuan Lin, Peter Shull, and Hongliang Ren

The paper is available at https://lnkd.in/dHHgw4qp


Six of our lab members have arrived in Atlanta and will be at ICRA presenting the following seven papers:

1) Three-Dimension Tip Force Perception and Axial Contact Location Identification for Flexible Endoscopy Using Tissue-Compliant Soft Distal Attachment Cap Sensors
2) TASEC: Rescaling Acquisition Strategy with Energy Constraints under Fusion Kernel for Active Incision Recommendation in Tracheotomy
3) ETSM: Automating Dissection Trajectory Suggestion and Confidence Map-Based Safety Margin Prediction for Robot-Assisted Endoscopic Submucosal Dissection
4) Advancing Dense Endoscopic Reconstruction with Gaussian Splatting-Driven Surface Normal-Aware Tracking and Mapping
5) Minimally Invasive Endotracheal Inside-Out Flexible Needle Driving System towards Microendoscope-Guided Robotic Tracheostomy
6) Variable-Stiffness Nasotracheal Intubation Robot with Passive Buffering: A Modular Platform in Mannequin Studies
7) SurgPLAN++: Universal Surgical Phase Localization Network for Online and Offline Inference
Looking forward to catching up!


🌟 Exciting News! Our latest research work has been accepted by IEEE Transactions on Instrumentation and Measurement (TIM). Introducing our newest paper: “Endotracheal Untethered Retractable Drill Mechanism toward Robot-assisted Endoluminal Inside-Out Dilatational Tracheostomy”, led by Sishen YUAN and Baijia Liang.

🔍 This paper addresses several key challenges in percutaneous dilatational tracheostomy (PDT), including cross-contamination, constrained operative space, complex anatomy, and the lack of tactile feedback during manual procedures. To overcome these barriers, we propose an “inside-out” robotic system equipped with a retractable drill and wireless magnetic actuation. By integrating feedback from tactile and magnetic sensors with precise control mechanisms, the system enhances the safety of tracheal punctures, preventing damage to adjacent tissues such as the esophagus and reducing reliance on manual expertise.

🔬 Ex vivo experiments on porcine tracheas validated the feasibility of this approach, demonstrating effective puncture with a maximum localization deviation of 6.308 mm and preliminarily confirming the system's potential to achieve safer and more consistent outcomes in clinical settings.

Paper link: https://lnkd.in/gEgmaDVj


We are pleased to share our latest work, Automatic Virtual-to-Real Calibration and Dynamic Registration of Deformable Tissue for Endoscopic Submucosal Dissection, recently accepted in Advanced Intelligent Systems. 🎉 🎉

This research presents an automatic calibration and dynamic registration method specifically designed for deformable tissues, integrating Augmented Reality (AR) technology to enhance surgical precision in Endoscopic Submucosal Dissection (ESD).

Our approach leverages a 6D pose estimator to align virtual and real-world target tissues seamlessly, utilizing the SuperGlue feature-matching network and the Metric3D depth estimation network for robust fusion. Additionally, our dynamic registration method enables real-time tracking of tissue deformation, ensuring more reliable surgical guidance.
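As a loose conceptual illustration of the alignment step (not the paper's learned pipeline): once feature matching (e.g., SuperGlue) and metric depth estimation (e.g., Metric3D) yield 3-D correspondences between the virtual model and the real scene, a rigid 6D pose can be recovered in closed form. The sketch below is a generic Kabsch/Umeyama least-squares solver on synthetic points, offered only as a reference for how such an alignment is computed.

```python
import numpy as np

def rigid_pose(src: np.ndarray, dst: np.ndarray):
    """Closed-form R, t minimizing ||dst - (R @ src + t)|| (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)         # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: recover a known rotation about z plus a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.01, -0.02, 0.05])
src = np.random.default_rng(0).normal(size=(100, 3))
dst = src @ R_true.T + t_true
R, t = rigid_pose(src, dst)
print(np.abs(dst - (src @ R.T + t)).mean())     # residual MAE, ~0
```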

Experimental validation demonstrated the effectiveness of our system, with automatic calibration experiments using cloth achieving a mean absolute error (MAE) of 3.79 ± 0.64 mm. Dynamic registration accuracy was assessed under varying tissue deformation, yielding an MAE of 6.03 ± 0.96 mm. Ex-vivo experiments with porcine small intestine tissue further validated our system's performance, with an AR calibration MAE of 3.11 ± 0.56 mm and a dynamic registration MAE of 3.20 ± 1.96 mm.

The full paper is in production and will be available at https://lnkd.in/gMFCDWv4


We're delighted to share that our paper, Innovating Robot-Assisted Surgery through Large Vision Models, has been accepted for publication in Nature Reviews Electrical Engineering! 🎉

In this paper, we explore the role of large vision models in advancing robot-assisted surgery, analyzing key developments and discussing future directions in AI-driven surgical innovation. By examining emerging trends and challenges, we contribute to the broader conversation on how intelligent visual systems can enhance precision, adaptability, and decision-making in surgical robotics.

Congrats to Prof. Zhe Min, Prof. Sam, Jiewen Lai, and Prof. Hongliang Ren!

Check out the full paper here: https://rdcu.be/elIbb


🚀 Thrilled to Share Highlights from #RoboSoft2025 – The 8th IEEE-RAS International Conference on Soft Robotics!

At the workshop “Origami and Kirigami: How Paper Folding and Cutting Have Revolutionized Soft Robotics and What's Beyond” (https://lnkd.in/gsYvat-X), Professor Hongliang Ren delivered a keynote speech titled “Tetherless Reconfigurations at Origami-Continuum Interfaces.” Drawing from our group's recent research published in top-tier journals such as Science Robotics, Nature Communications, Advanced Functional Materials, ACS Nano, and Advanced Materials Technology, he explored how innovative materials, including high-temperature-resistant, phase-change elastic, and magnetically responsive ones, are revolutionizing origami-based robots. These works enable precise, continuous motion for in vivo medical applications and highlight the vast interdisciplinary opportunities in medical soft robotics.

Additionally, our PhD student WENCHAO YUE delivered three oral presentations based on our accepted papers. Specifically, he showcased a compact OCT-based tactile sensor (2 mm in diameter, developed in collaboration with ABI Lab (https://lnkd.in/gUuzQqDt) under Prof. Wu ‘Scott’ YUAN at CUHK BME), a bistable origami robot designed for microneedle puncture (in partnership with Xu’s Lab (https://lnkd.in/gdb-5tTv), led by Prof. CHENJIE XU at CityU BME), and an ink-based stretching sensor inspired by kirigami principles. His talks highlighted the latest research progress in our group and the exciting potential of these technologies in soft robotics for medical applications.


๐Ÿค๐Ÿค We were thrilled to host Prof. Nicolas Padoy from the CAMMA research group at the University of Strasbourg in our lab! ๐ŸŽ‰

During his visit, our team had the opportunity to showcase our latest research efforts across a wide range of topics in surgical intelligence and automation, including augmented reality-assisted surgery, motion magnification, safe dissection trajectory planning, 3D scene understanding & reconstruction, vision-language action & navigation, robotic OCT, intubation, and ESD systems.

The discussions were both warm and thought-provoking, as we exchanged ideas and explored potential synergies between our research efforts. Prof. Padoy's insights and expertise added immense value to our ongoing projects, and we're excited about the possibilities for future collaboration.


📢 We are pleased to share our recent work on advancing surgical guidance and automation for Robot-Assisted Endoscopic Submucosal Dissection (ESD). Both studies emphasize collaboration between clinicians and AI, aiming to improve surgical efficiency and reduce human error.

1๏ธโƒฃ #ICRA2025 – “ETSM: Automating Dissection Trajectory Suggestion and Confidence Map-Based Safety Margin Prediction for Robot-assisted Endoscopic Submucosal Dissection”

This work introduces a framework for predicting optimal dissection trajectories while integrating a confidence map-based safety margin to minimize risks like tissue perforation.

🚀 Key contributions:

– A novel dataset (ETSM), featuring over 1,800 annotated clips from robotic ESD procedures.

– The RCMNet model, which uses regression to predict confidence maps, guiding surgeons toward safer dissection zones (see the sketch after this list).

– Outperformed baselines with a mean absolute error of 3.18, demonstrating robust performance even under visual challenges.
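As a rough, hypothetical illustration of confidence-map regression (not RCMNet itself), the sketch below shows an encoder-decoder that regresses a dense per-pixel confidence map from an endoscopic frame under an MSE objective. All layer sizes, the sigmoid output range, and the loss choice are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ConfidenceMapRegressor(nn.Module):
    """Toy encoder-decoder mapping an RGB frame to a confidence map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),                       # confidence in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConfidenceMapRegressor()
frame = torch.randn(1, 3, 256, 256)             # dummy endoscopic frame
target = torch.rand(1, 1, 256, 256)             # dummy ground-truth map
loss = nn.functional.mse_loss(model(frame), target)  # regression objective
loss.backward()
```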

Authors: Mengya Xu, Wenjin Mo, Guankun Wang, Huxin Gao, An Wang, Long Bai, Chaoyang Lyu, Xiaoxiao Yang, Zhen Li, and Hongliang Ren

📚 Paper link: https://lnkd.in/gUdr6XQc

2๏ธโƒฃ #IPCAI2025 – “PDZSeg: Adapting the Foundation Model for Dissection Zone Segmentation with Visual Prompts in Robot-assisted Endoscopic Submucosal Dissection”

This paper adapts foundation models to enable region-specific dissection zone segmentation using flexible visual prompts (e.g., scribbles, bounding boxes).

🚀 Key insights:

– A novel ESD-DZSeg dataset tailored for prompt-based segmentation tasks.

– Fine-tuned the foundation model DINOv2 with LoRA for efficient adaptation to surgical tasks, achieving state-of-the-art accuracy; long scribble prompts yielded a mean Intersection over Union (IoU) of 74.06% (see the sketch after this list).

– Empowers surgeons to intuitively refine segmentation through natural visual cues, enhancing real-time decision support.
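For context on the LoRA adaptation mentioned above, here is a self-contained sketch of the low-rank update idea: a frozen linear layer (standing in for a DINOv2 attention projection) augmented with trainable low-rank factors, so only a small number of parameters are updated. The rank, scaling, and layer shapes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():        # freeze pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r                  # standard LoRA scaling

    def forward(self, x):
        # y = W x + (alpha/r) * B A x; B starts at zero, so training
        # begins from the pretrained behavior.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Hypothetical usage: wrap a stand-in qkv projection of a small ViT.
qkv = LoRALinear(nn.Linear(384, 3 * 384))
tokens = torch.randn(2, 197, 384)               # dummy ViT token sequence
out = qkv(tokens)                               # shape (2, 197, 1152)
trainable = sum(p.numel() for p in qkv.parameters() if p.requires_grad)
print(trainable)                                # only the low-rank factors
```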

Authors: Mengya Xu, Wenjin Mo, Guankun Wang, Huxin Gao, An Wang, Zhen Li, Xiaoxiao Yang, and Hongliang Ren

📚 Paper link: https://lnkd.in/ggUM-5Zc
