🎉 We are honored to share that our lab's paper, “PDZSeg: Adapting the Foundation Model for Dissection Zone Segmentation with Visual Prompts in Robot-Assisted Endoscopic Submucosal Dissection,” has been published in the International Journal of Computer Assisted Radiology and Surgery.

The paper was also accepted for presentation at IPCAI 2025, and we're especially humbled to receive the IHU Strasbourg and NDI Bench to Bedside Award: Honorable Mention.

In this work, we address the challenge of accurately delineating dissection zones during endoscopic submucosal dissection (ESD) procedures. By overlaying flexible visual cues, such as scribbles and bounding boxes, directly onto surgical images, our PDZSeg model guides segmentation toward better precision and enhanced safety. Leveraging a state-of-the-art foundation model (DINOv2) and an efficient LoRA fine-tuning strategy, we adapted our approach on the specialized ESD-DZSeg dataset. Our experimental results show promising improvements over prior methods, offering robust support for intraoperative guidance and remote surgical training.
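For readers who want a feel for the adaptation recipe, below is a minimal PyTorch sketch of LoRA-style fine-tuning on a frozen DINOv2 backbone. This is illustrative only, not the PDZSeg code: the LoRALinear wrapper, the rank, and the choice to patch only the attention projections are our assumptions; the torch.hub entry point is DINOv2's public one.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # keep pretrained weights frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no-op at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Load a pretrained DINOv2 ViT-S/14 and patch its attention projections with
# LoRA, so only the low-rank factors (a tiny fraction of parameters) train.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
for blk in backbone.blocks:
    blk.attn.qkv = LoRALinear(blk.attn.qkv)
```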

Our sincere thanks to every colleague (Mengya Xu, Wenjin Mo, Guankun Wang, Huxin Gao, An Wang), mentor (Dr. Ning Zhong, Dr. Zhen Li, Dr. Xiaoxiao Yang, Prof. Hongliang Ren), and community member whose support has been indispensable. This achievement reaffirms our collective effort and inspires us to further refine robotic-assisted techniques towards enhanced safety and effectiveness.

Paper available at: https://lnkd.in/g7KcytnE


🎉 Big congratulations to Sishen YUAN on successfully defending his PhD thesis, “Magnetic Medical Robots: System Design, Control and Translational Applications”! 🚀

Dr. Yuan's research advances the field of medical robotics, paving the way for innovative healthcare solutions. A well-deserved milestone after years of dedication and research!

Special thanks to his supervisors, Prof. Hongliang Ren and Prof. Max Q.-H. Meng, as well as the examining committee members, Prof. Gao Shichang, Prof. Wu ‘Scott’ YUAN, and Prof. Zhidong Wang, for their invaluable guidance and support.

🔗 Learn more about Dr. Yuan's research at https://lnkd.in/g7GVhwtv.


🚀 Excited to share our recent work accepted to Advanced Robotics Research:

“๐ฟ๐‘œ๐‘ค-๐‘†๐‘ก๐‘Ÿ๐‘Ž๐‘–๐‘› ๐น๐‘™๐‘’๐‘ฅ๐‘–๐‘๐‘™๐‘’ ๐ผ๐‘›๐‘˜-๐ต๐‘Ž๐‘ ๐‘’๐‘‘ ๐‘†๐‘’๐‘›๐‘ ๐‘œ๐‘Ÿ๐‘  ๐ธ๐‘›๐‘Ž๐‘๐‘™๐‘’ ๐ป๐‘ฆ๐‘๐‘’๐‘Ÿ๐‘’๐‘™๐‘Ž๐‘ ๐‘ก๐‘–๐‘ ๐ผ๐‘›๐‘“๐‘™๐‘Ž๐‘ก๐‘–๐‘œ๐‘› ๐‘ƒ๐‘’๐‘Ÿ๐‘๐‘’๐‘๐‘ก๐‘–๐‘œ๐‘› ๐‘‰๐‘–๐‘Ž ๐บ๐‘’๐‘œ๐‘š๐‘’๐‘ก๐‘Ÿ๐‘–๐‘ ๐‘ƒ๐‘Ž๐‘ก๐‘ก๐‘’๐‘Ÿ๐‘›๐‘–๐‘›๐‘””

📄 What we explored:

– Designed Ω-shaped flexible sensors using conductive ink to reduce strain mismatches on inflatable robots

– Developed a light-curing transfer method for precise sensor attachment

– Tested integration with balloon-type robots, showing improved deformation tracking at >300% expansion

🩺 Our sensor system has the potential to enable:

🔹 Real-time inflation tracking for safer human-robot interaction

🔹 Spatial perception in biomedical devices (catheters, surgical tools)

🔹 A 5× reduction in circumference/area error vs. conventional designs

Congrats to the authors: WENCHAO YUE, Shuoyuan Chen, Yan Ke, Yingyi Wen, Ruijie Tang, Guohua Hu, and Hongliang Ren.

📄 Paper (open access): https://lnkd.in/gm9DRybP


🎉 Congratulations to our Ph.D. candidate, Long Bai, for successfully defending his doctoral dissertation on June 3, 2025! 🎉

We extend our sincere gratitude to Prof. Tan Lee, Prof. Qi Dou, and Prof. S. Kevin Zhou for serving as examiners during Long Bai’s defense. Special thanks to his supervisors, Prof. Hongliang Ren and Prof. Jiewen Lai, for their invaluable guidance throughout his Ph.D. journey.

During his time at CUHK RenLab, Dr. Bai has made impressive contributions to surgical and medical artificial intelligence, particularly in multimodal AI.

🔗 For more details about his research, visit his personal website: longbai-cuhk.github.io.

Wishing Dr. Long Bai all the best in his future endeavors! 🚀👏


🎉 🎉 We are thrilled to announce that our latest work on surgical workflow recognition has been accepted by the Information Fusion journal (IF: 14.8).

๐Ÿ—ž๏ธ We propose a multimodal Graph Representation network with Adversarial feature Disentanglement (GRAD) for robust surgical workflow recognition in challenging scenarios with domain shifts or corrupted data. Specifically, we introduce a Multimodal Disentanglement Graph Network (MDGNet) that captures fine-grained visual information while explicitly modeling the complex relationships between vision and kinematic embeddings through graph-based message modeling. To align feature spaces across modalities, we propose a Vision-Kinematic Adversarial (VKA) framework that leverages adversarial training to reduce modality gaps and improve feature consistency. Furthermore, we design a Contextual Calibrated Decoder, incorporating temporal and contextual priors to enhance robustness against domain shifts and corrupted data.

๐Ÿ—ž๏ธ Extensive comparative and ablation experiments demonstrate the effectiveness of our model and proposed modules. Specifically, we achieved an accuracy of 86.87% and 92.38% on two public datasets, respectively. Moreover, our robustness experiments show that our method effectively handles data corruption during storage and transmission, exhibiting excellent stability and robustness. Our approach aims to advance automated surgical workflow recognition, addressing the complexities and dynamism inherent in surgical procedures.

📑 Paper link: https://lnkd.in/g9SWYPJg

๐Ÿ‘ ๐Ÿ‘ We extend our gratitude to our co-authors and collaborators from around the world, including Long Bai, Boyi Ma, Ruohan Wang, Guankun Wang, Beilei Cui, Zhongliang Jiang, Mobarakol Islam, Zhe Min, Jiewen Lai, Nassir Navab, and Hongliang Ren!


🚀 Call for Papers: IoWT 2025 🚀

The ๐Ÿ๐ฌ๐ญ ๐–๐จ๐ซ๐ค๐ฌ๐ก๐จ๐ฉ ๐จ๐ง ๐ˆ๐ง๐ญ๐ž๐ซ๐ง๐ž๐ญ ๐จ๐Ÿ ๐–๐ž๐š๐ซ๐š๐›๐ฅ๐ž ๐“๐ก๐ข๐ง๐ ๐ฌ (๐ˆ๐จ๐–๐“ ๐Ÿ๐ŸŽ๐Ÿ๐Ÿ“) will be held at The IEEE 11th World Forum on IoT in ๐‚๐ก๐ž๐ง๐ ๐๐ฎ, ๐‚๐ก๐ข๐ง๐š (๐Ž๐œ๐ญ ๐Ÿ๐Ÿ•โ€“๐Ÿ‘๐ŸŽ, ๐Ÿ๐ŸŽ๐Ÿ๐Ÿ“)!

We invite submissions on AI-driven wearable systems, energy-efficient IoT, human-centric automation, and scalable intelligence.

📅 Key Dates:

📝 Submission Deadline: June 15, 2025

📢 Notification: July 31, 2025

📄 Camera-ready: August 15, 2025

🔗 More details: IoWT 2025 Workshop (https://lnkd.in/gj_arCXX)

Our lab members' achievements at the MRC Symposium 2025

๐Ÿ† ๐‘ฉ๐’†๐’”๐’• ๐‘ซ๐’†๐’”๐’Š๐’ˆ๐’ ๐‘จ๐’˜๐’‚๐’“๐’…

Tinghua Zhang, Sishen YUAN et al. for “PneumaOCT: Pneumatic optical coherence tomography endoscopy for targeted distortion-free imaging in tortuous and narrow internal lumens”, a collaboration between CUHK ABI Lab (https://lnkd.in/gUuzQqDt) and RENLab (labren.org), published in Science Advances (DOI: 10.1126/sciadv.adp3145).

๐Ÿ”ฌ ๐‘ฉ๐’†๐’”๐’• ๐‘จ๐’‘๐’‘๐’๐’Š๐’„๐’‚๐’•๐’Š๐’๐’ ๐‘จ๐’˜๐’‚๐’“๐’…

Dr. Mengya Xu, Wenjin Mo et al. for their work “ETSM: Automating Dissection Trajectory Suggestion and Confidence Map-Based Safety Margin Prediction for Robot-assisted Endoscopic Submucosal Dissection”, accepted at #ICRA2025 (arXiv:2411.18884).

🌟 Congratulations to our brilliant team members on these well-deserved recognitions!

Additionally, Prof. Hongliang Ren delivered an insightful talk, “Endoscopic Multisensory Navigation with Soft Flexible Robotics”, highlighting the latest advancements in endoscopic navigation and soft medical robotics.


๐Ÿ“ข ๐‚๐š๐ฅ๐ฅ ๐Ÿ๐จ๐ซ ๐๐š๐ฉ๐ž๐ซ๐ฌ โ€“ ๐ˆ๐„๐„๐„ ๐ˆ๐‚๐ˆ๐€ ๐Ÿ๐ŸŽ๐Ÿ๐Ÿ“ ๐Ÿš€ Join us at the 2025 International Conference on Information and Automation (๐ˆ๐‚๐ˆ๐€ ๐Ÿ๐ŸŽ๐Ÿ๐Ÿ“) in Lanzhou, China, from August 28โ€“31, 2025!

This conference serves as a platform for researchers and practitioners to discuss advancements, challenges, and opportunities in information, automation, artificial intelligence, robotics, image processing, computer vision, digital signal processing (DSP), and biomedical engineering (BME).

📅 Submission Deadline: June 15, 2025

🔗 Conference Website: http://www.icia2025.org/

🎉 Excited to share our paper, “Disentangling Contact Location for Stretchable Tactile Sensors from Soft Waveguide Ultrasonic Scatter Signals”, published in Advanced Intelligent Systems.

In this work, we tackled a long-standing challenge in soft tactile sensing: accurately localizing a contact point on a stretchable sensor even in the presence of strain and variable contact forces. Our approach uses ultrasonic scatter signals extracted from a soft waveguide to decouple these intertwined effects. A data-driven method was developed, combining the following (a brief illustrative sketch follows the list):

– Global feature extraction: Using the Hilbert transform to capture the overall energy distribution before and after force contact.

– Local feature extraction: Leveraging continuous wavelet transforms (CWT) to retrieve high-resolution time–frequency characteristics.

– Deep learning integration: Fusing these features through a deep convolutional neural network and multilayer perceptron regression, which allowed us to achieve a mean absolute error of just 0.627 mm and a mean relative error of 3.19%.
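As a rough illustration of the two feature streams, the Python sketch below computes a Hilbert envelope (global) and a CWT time–frequency map (local) on a synthetic echo. scipy.signal.hilbert and pywt.cwt are the standard library calls, while the sampling rate, wavelet, and scales are our assumptions rather than the authors' settings.

```python
import numpy as np
import pywt
from scipy.signal import hilbert

fs = 1_000_000                      # assumed 1 MHz sampling rate
t = np.arange(2048) / fs
sig = np.sin(2 * np.pi * 40_000 * t) * np.exp(-2_000 * t)   # toy decaying echo

# Global feature: the Hilbert envelope summarizes the overall energy
# distribution of the scatter signal.
envelope = np.abs(hilbert(sig))

# Local feature: the continuous wavelet transform yields a time-frequency
# map that a CNN can consume as an image-like input.
scales = np.arange(1, 65)
coeffs, freqs = pywt.cwt(sig, scales, "morl", sampling_period=1 / fs)
tf_map = np.abs(coeffs)             # shape: (n_scales, n_samples)

# Both streams would then be fused by a CNN + MLP regressor to predict
# the contact location, as described above.
print(envelope.shape, tf_map.shape)
```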

This fusion of global and local signal analysis not only overcomes the limitations of traditional time-of-flight estimation methods but also paves the way for more robust multimodal sensing in robotics and human–machine interfaces. The implications for advanced robotics, intelligent prosthetics, and other emerging applications are truly exciting.

Authors: Zhiheng Li, Yuan Lin, Peter Shull, and Hongliang Ren

The paper is available at https://lnkd.in/dHHgw4qp


Six of our lab members have arrived in Atlanta and will be at ICRA presenting the following seven papers:

1) Three-Dimension Tip Force Perception and Axial Contact Location Identification for Flexible Endoscopy Using Tissue-Compliant Soft Distal Attachment Cap Sensors
2) TASEC: Rescaling Acquisition Strategy with Energy Constraints under Fusion Kernel for Active Incision Recommendation in Tracheotomy
3) ETSM: Automating Dissection Trajectory Suggestion and Confidence Map-Based Safety Margin Prediction for Robot-Assisted Endoscopic Submucosal Dissection
4) Advancing Dense Endoscopic Reconstruction with Gaussian Splatting-Driven Surface Normal-Aware Tracking and Mapping
5) Minimally Invasive Endotracheal Inside-Out Flexible Needle Driving System towards Microendoscope-Guided Robotic Tracheostomy
6) Variable-Stiffness Nasotracheal Intubation Robot with Passive Buffering: A Modular Platform in Mannequin Studies
7) SurgPLAN++: Universal Surgical Phase Localization Network for Online and Offline Inference
Looking forward to catching up!
