๐Ÿค๐Ÿค We were thrilled to host Prof. Nicolas Padoy from the CAMMA research group at the University of Strasbourg in our lab! ๐ŸŽ‰

During his visit, our team had the opportunity to showcase our latest research efforts across a wide range of topics in surgical intelligence and automation, including augmented reality-assisted surgery, motion magnification, safe dissection trajectory planning, 3D scene understanding & reconstruction, vision-language action & navigation, robotic OCT, intubation, and ESD systems.

The discussions were both warm and thought-provoking, as we exchanged ideas and explored potential synergies between our research efforts. Prof. Padoy's insights and expertise added immense value to our ongoing projects, and we're excited about the possibilities for future collaboration.


📢 We are pleased to share our recent work on advancing surgical guidance and automation for Robot-Assisted Endoscopic Submucosal Dissection (ESD). Both studies emphasize collaboration between clinicians and AI, aiming to improve surgical efficiency and reduce human error.

1๏ธโƒฃ #ICRA2025 – “ETSM: Automating Dissection Trajectory Suggestion and Confidence Map-Based Safety Margin Prediction for Robot-assisted Endoscopic Submucosal Dissection”

This work introduces a framework for predicting optimal dissection trajectories while integrating a confidence map-based safety margin to minimize risks like tissue perforation.

🚀 Key contributions:

– A novel dataset (ETSM), featuring over 1,800 annotated clips from robotic ESD procedures.

– The RCMNet model, which regresses confidence maps to guide surgeons toward safer dissection zones (a rough sketch of confidence-map regression follows this list).

– Outperformed baselines with a mean absolute error of 3.18, demonstrating robust performance even under visual challenges.
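As a rough, non-authoritative illustration of what confidence-map regression can look like (the layer sizes, loss, and the class name below are assumptions, not the actual RCMNet), the sketch regresses a per-pixel confidence map from an endoscopic frame:

```python
# Illustrative sketch only: a minimal encoder-decoder that regresses a
# per-pixel confidence map from an RGB frame. Layer sizes, loss, and the
# class name are assumptions, not the actual RCMNet architecture.
import torch
import torch.nn as nn

class ConfidenceRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),  # per-pixel confidence in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConfidenceRegressor()
frame = torch.randn(1, 3, 256, 256)           # dummy endoscopic frame
target = torch.rand(1, 1, 256, 256)           # dummy ground-truth confidence map
loss = nn.functional.mse_loss(model(frame), target)  # regression objective
loss.backward()
```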

Authors: Mengya Xu, Wenjin Mo, Guankun Wang, Huxin Gao, An Wang, Long Bai, Chaoyang Lyu, Xiaoxiao Yang, Zhen Li, and Hongliang Ren

📚 Paper link: https://lnkd.in/gUdr6XQc

2๏ธโƒฃ #IPCAI2025 – “PDZSeg: Adapting the Foundation Model for Dissection Zone Segmentation with Visual Prompts in Robot-assisted Endoscopic Submucosal Dissection”

This paper adapts foundation models to enable region-specific dissection zone segmentation using flexible visual prompts (e.g., scribbles, bounding boxes).

🚀 Key insights:

– A novel ESD-DZSeg dataset tailored for prompt-based segmentation tasks.

– Fine-tuned the foundation model DINOv2 with LoRA for efficient adaptation to surgical tasks, achieving state-of-the-art accuracy; long scribble prompts yielded a mean Intersection over Union (IoU) of 74.06% (a rough sketch of the LoRA idea follows this list).

– Empowers surgeons to intuitively refine segmentation through natural visual cues, enhancing real-time decision support.
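As a rough sketch of the LoRA idea (low-rank adapters trained on top of a frozen backbone; the rank, scaling, and choice of wrapped layer below are illustrative assumptions, and a real setup would typically wrap the attention projections of the frozen DINOv2 encoder):

```python
# Illustrative LoRA sketch: a frozen linear layer augmented with a trainable
# low-rank update W x + (alpha / r) * B A x. Rank and scaling are arbitrary
# assumptions; in practice such adapters would wrap DINOv2's projection layers.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False           # backbone weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(768, 768))       # e.g. one ViT projection layer
tokens = torch.randn(1, 197, 768)             # dummy patch tokens
out = layer(tokens)                           # only A and B receive gradients
```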

Authors: Mengya Xu, Wenjin Mo, Guankun Wang, Huxin Gao, An Wang, Zhen Li, Xiaoxiao Yang and Hongliang Ren

📚 Paper link: https://lnkd.in/ggUM-5Zc


We are thrilled to share that our paper, “Fine-grained Classification Reveals Angiopathological Heterogeneity of Port Wine Stains Using OCT and OCTA Features”, has been accepted by the IEEE Journal of Biomedical and Health Informatics (JBHI)! 🎉🎉

Port wine stains (PWS) are congenital vascular malformations that can lead to significant psychological and physical complications if untreated. Photodynamic therapy (PDT) is a common treatment but shows variable efficacy due to the heterogeneous vascular architecture of PWS lesions. Current diagnostic methods relying on skin surface appearance often fail to reflect the underlying structural differences of PWS lesions, leading to inconsistent treatment outcomes.

Optical coherence tomography (#OCT) and OCT angiography (#OCTA) are promising tools for imaging PWS lesions. However, existing OCTA quantitative metrics cannot show significant differences among the various PWS subtypes based on clinical skin appearance diagnosis. This work proposes a fine-grained classification method for PWS using OCT and OCTA features. Compared with current clinical diagnosis based on skin appearance, the proposed method reveals the angiopathological heterogeneity of hypodermic PWS lesions and has the potential to provide more effective subtyping and treatment strategies for PWS.
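As a loose, hedged illustration of feature-based subtyping (the feature names, labels, and classifier below are placeholders, not the paper's actual pipeline), quantitative OCT/OCTA metrics could feed a standard classifier along these lines:

```python
# Placeholder illustration only: quantitative OCT/OCTA metrics (e.g., vessel
# density, vessel diameter, lesion depth) feeding a standard classifier to
# separate PWS subtypes. Data, features, and model are not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = rng.normal(size=(120, 3))     # columns: vessel density, diameter, depth
labels = rng.integers(0, 3, size=120)    # three hypothetical PWS subtypes

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, features, labels, cv=5).mean())  # chance-level on random data
```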

Research team: Xiaofeng Deng, Defu Chen, Bowen Liu, Xiwan Zhang, Haixia Qiu, Wu ‘Scott’ YUAN, and Hongliang Ren.

Paper available at: DOI 10.1109/JBHI.2025.3545931


🚀 Endoscopic SLAM with 2D Gaussian Splatting! 🩺🔬

Our paper entitled “Advancing Dense Endoscopic Reconstruction with Gaussian Splatting-driven Surface Normal-aware Tracking and Mapping” has been accepted at #ICRA2025! 🎉

Multi-view inconsistencies in 3D Gaussian Splatting (3DGS) have long limited precise depth and surface reconstruction in minimally invasive surgery. Our team (Yiming Huang*, Beilei Cui*, Long Bai*, Zhen Chen, Jinlin Wu, Zhen Li, Hongbin Liu, Hongliang Ren) from The Chinese University of Hong Kong and the Centre for Artificial Intelligence and Robotics, Hong Kong Institute of Science & Innovation, CAS, introduces Endo-2DTAM, a real-time endoscopic SLAM system that combines 2D Gaussian Splatting (2DGS) with a surface normal-aware pipeline to achieve geometrically accurate, high-quality reconstruction.

🔑 Key Innovations:

✅ Robust tracking via point-to-point/plane metrics (sketched after this list)

✅ Surface enhancement using normal consistency & depth distortion

✅ Pose-consistent keyframe sampling for coherence
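As a rough illustration of the point-to-plane term behind the tracking metric (a generic formulation, not the Endo-2DTAM implementation, which combines point-to-point and point-to-plane terms with further weighting):

```python
# Generic point-to-plane residual sketch: the point-to-point error of a
# transformed source point, projected onto the matched target's surface normal.
import numpy as np

def point_to_plane_residuals(src_pts, tgt_pts, tgt_normals, R, t):
    """src_pts, tgt_pts, tgt_normals: (N, 3) arrays; R: (3, 3); t: (3,)."""
    transformed = src_pts @ R.T + t                                # candidate pose
    return np.sum((transformed - tgt_pts) * tgt_normals, axis=1)   # signed distances

# Toy usage with an identity pose
src = np.random.rand(100, 3)
tgt = src + 0.01 * np.random.randn(100, 3)
normals = np.tile([0.0, 0.0, 1.0], (100, 1))
res = point_to_plane_residuals(src, tgt, normals, np.eye(3), np.zeros(3))
print(float(np.sqrt(np.mean(res ** 2))))                           # RMS residual
```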

📈 Results:

โ—    1.87ยฑ0.63 mm RMSE in depth reconstruction

โ—    Real-time rendering & efficient computation

โ—    Open-source code: GitHub (https://lnkd.in/gyRr2YxS)

Paving the way for safer, smarter surgical robotics! 🤖💡


🎉 We are thrilled to share that Prof. Hongliang Ren has been honored with the prestigious “Young Researcher Award 2023” by The Chinese University of Hong Kong (CUHK)! 🏆

This recognition highlights Prof. Ren's contributions to intelligent medical robotics and AI. Join us in congratulating Prof. Ren on this well-deserved achievement! 🎉👏

🔗 Check the awards inauguration ceremony at https://lnkd.in/gEAcPecJ


Excited to share that our paper entitled “Minimally Invasive Endotracheal Inside-out Flexible Needle Driving System” has been accepted for #ICRA2025!

– Background

Open tracheostomy (OT) remains the gold standard for managing airway obstruction; however, it is associated with stringent procedural requirements, scarring, and risks of infection. Percutaneous dilation tracheostomy (PDT), while more cost-effective, less invasive, and safer for surgeons, carries the risk of injuring the posterior tracheal wall and esophagus. Additionally, precise identification of tracheal rings and the optimal puncture site can be challenging.

– Our Contribution

To address these limitations and enhance the safety and simplicity of tracheostomy procedures, a minimally invasive, endotracheal inside-out flexible needle-driving system has been developed for microendoscope-guided robotic tracheostomy (MERT). This system integrates optical coherence tomography (OCT) and microendoscopic guidance to facilitate robotic insertion into the trachea, enabling an inside-out puncture with a flexible needle. The device is designed to operate through a standard endotracheal tube (ETT), and the puncture direction of the flexible needle is fully adjustable. Kinematic and static models of the flexible needle have been developed, and the system’s feasibility has been validated through porcine trachea puncture experiments.
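The paper's kinematic and static models are not reproduced here; as a generic, hedged illustration of how a bending needle segment is often approximated, a constant-curvature model gives the tip position from a segment length and bending angle (a textbook approximation, not the system's actual model):

```python
# Generic constant-curvature sketch for a bending segment (not the paper's
# actual kinematic model): the tip of an arc of length L bent by angle theta.
import numpy as np

def constant_curvature_tip(length, theta):
    """Tip (x, z) in the bending plane for arc length `length` and bend `theta` (rad)."""
    if abs(theta) < 1e-9:
        return np.array([0.0, length])        # straight segment
    r = length / theta                        # arc radius
    return np.array([r * (1.0 - np.cos(theta)), r * np.sin(theta)])

print(constant_curvature_tip(0.05, np.deg2rad(60)))   # 50 mm segment bent by 60 degrees
```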

This approach demonstrates significant potential for improving tracheostomy outcomes by minimizing invasiveness and enhancing procedural precision.

– Authors: Botao Lin, Sishen YUAN, Tinghua Zhang, Tao Zhang, Ruoyi Hao, Wu ‘Scott’ YUAN, Chwee Ming Lim, and Hongliang Ren

Looking forward to sharing our findings with the robotics and medical communities at #ICRA2025! 🌍🤖🩺


🌟 Excited to share that our paper, “Variable-Stiffness Nasotracheal Intubation Robot with Passive Buffering: A Modular Platform in Mannequin Studies,” has been accepted for presentation at #ICRA2025! 🌟

This work introduces a modular robotic platform designed to assist in nasotracheal intubation, a critical yet challenging procedure in clinical settings. While orotracheal intubation robots have seen significant development, nasotracheal intubation remains underdeveloped, particularly in terms of safety mechanisms. Our robot features a variable-stiffness fiberoptic bronchoscope (FOB) control module, enabling both low-stiffness mode for safe navigation through the nasal cavity and high-stiffness mode for enhanced load-bearing near the glottis. Additionally, a compact FOB feeding module with passive failure protection ensures controlled force application, minimizing risks during the procedure.

We're grateful to our team (Ruoyi Hao, Sam, Jiewen Lai, Wenqi Zhong, Dihong Xie, Yu Tian, Tao Zhang, Zhang Yang, Catherine P. L. Chan, Jason Y. K. Chan, and Hongliang Ren) for their dedication and to #ICRA for this opportunity to share our work.

Looking forward to discussing this with the robotics and medical communities!


We're excited to share that our research paper, “Improving Efficiency in Path Planning: Tangent Line Decomposition Algorithm”, led by Tian Yu, has been accepted at #ICRA2025! 🎉

In this work, we introduce the Tangent Line Decomposition (TLD) algorithm, a new approach to finding collision-free paths in 2D polygon and 3D polyhedron environments.

TLD simplifies path planning by breaking it into smaller steps, focusing on one key obstacle at a time. Instead of building a complete graph, it uses a best-first search to reduce unnecessary computations. While the paths generated by TLD may not always be optimal, they can serve as a helpful starting point for other algorithms to refine further.
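As a generic illustration of the best-first strategy described above (not the actual TLD implementation; the node and cost abstractions are placeholders), candidate subgoals can be expanded from a priority queue ordered by estimated cost, so a full graph never has to be built up front:

```python
# Generic best-first search sketch (not the actual TLD algorithm): expand
# candidates from a priority queue ordered by a cost estimate instead of
# constructing and searching a complete graph first.
import heapq
import itertools

def best_first_search(start, goal, neighbors, estimate):
    """neighbors(node) -> iterable of (next_node, step_cost);
    estimate(node) -> heuristic cost-to-go."""
    counter = itertools.count()               # tie-breaker so nodes never compare
    frontier = [(estimate(start), next(counter), start, [start])]
    visited = set()
    while frontier:
        _, _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path                       # first path found under the heuristic
        if node in visited:
            continue
        visited.add(node)
        for nxt, _cost in neighbors(node):
            if nxt not in visited:
                heapq.heappush(frontier, (estimate(nxt), next(counter), nxt, path + [nxt]))
    return None                               # no collision-free path found
```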

In our experiments, TLD showed improvements over the baseline LTA* method, achieving faster planning speeds in both 2D and 3D environments. The approach is also flexible, working in both convex and concave obstacle settings when combined with convex decomposition.

We look forward to sharing more details at ICRA 2025 and learning from others in the field!


🎉 Exciting news 🎉 Our lab has 7 papers accepted at #ICRA2025! We will be sharing all the papers with you in a series of posts. Stay tuned for updates!

In this post, we are pleased to share a paper entitled “Three-dimension Tip Force Perception and Axial Contact Location Identification for Flexible Endoscopy using Tissue-compliant Soft Distal Attachment Cap Sensors”, contributed by Zhang T†, Yang Yang†, Yang Yang, Huxin Gao, Sam, Jiewen Lai, and Hongliang Ren∗.

In endoluminal surgeries, inserting a flexible endoscope is one of the fundamental procedures. During this process, vision remains the primary feedback, while perception of the magnitude and location of tool-tissue contact is insufficient.

To address this issue, we propose a fiber Bragg grating (FBG)-based tissue-compliant sensor cap with multi-mode sensing capabilities, including contact location identification at the terminal surface and three-dimensional contact force perception at the tip. Using the relative contact location information, operators can adjust the steerable segment of the endoscope when transitioning from one segment of a natural orifice to a narrower segment that may be obstructed by constricted lumens (Fig. 1).

The FBG-based sensor can perceive the tip contact force and identify the axial contact location with high precision, as shown in Fig. 2. The experimental results demonstrate the potential of the proposed sensing mechanism for surgeries requiring endoscope insertion.
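As a hedged, generic illustration of how such a sensor could be calibrated (a common FBG practice, not necessarily the paper's method; the grating count and data below are synthetic), a linear map from Bragg wavelength shifts to a 3D tip force can be fitted by least squares:

```python
# Generic FBG calibration sketch (not necessarily the paper's method): fit a
# linear map from Bragg wavelength shifts to 3D tip force with least squares.
import numpy as np

rng = np.random.default_rng(0)
n_gratings, n_samples = 4, 200
true_map = rng.normal(size=(n_gratings, 3))                          # synthetic ground truth
shifts = rng.normal(size=(n_samples, n_gratings))                    # wavelength shifts (nm)
forces = shifts @ true_map + 0.01 * rng.normal(size=(n_samples, 3))  # reference forces (N)

calib, *_ = np.linalg.lstsq(shifts, forces, rcond=None)              # (n_gratings, 3) map
estimated = shifts @ calib                                            # estimated 3D forces
print(float(np.sqrt(np.mean((estimated - forces) ** 2))))             # calibration residual
```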

Stay tuned for more research from our lab on endoscopic force sensing!