The 3rd C4SR+ Workshop: Continuum, Compliant, Cooperative, Cognitive Surgical Robotic Systems in the Embodied AI Era.
Category: News
Prof. Hongliang Ren presented at the 6th CCF China Intelligent Robot Academic Annual Meeting, sharing insights on "Motion Generation and Perception of Flexible Robots in Minimally Invasive Intracavity Surgery".
Talk Highlights:
The presentation explored the challenges and opportunities in motion generation and perception for flexible robots operating in minimally invasive surgical environments. Prof. Ren emphasized the importance of image-guided robotic systems in enhancing surgical precision, flexibility, and repeatability, while acknowledging the complexities these systems introduce in development.
He shared recent advances from our lab in intelligent motion planning and perception, aiming to enable smart micro-imaging and guided robotic interventions. The proposed remote robotic system is tailored for surgical applications, empowering clinicians with multi-modal sensing and continuous motion generation for dexterous operations.
Excited to share that our paper "EndoVLA: Dual-Phase Vision-Language-Action Model for Autonomous Tracking in Endoscopy" has been accepted to the Conference on Robot Learning (CoRL) 2025!
In this project, we tackled the unique challenges of robotic endoscopy by integrating vision, language grounding, and motion planning into one end-to-end framework. EndoVLA enables:
– Precise polyp tracking through surgeon-issued prompts
– Delineation and following of abnormal mucosal regions
– Adherence to circumferential cutting markers during resections
We introduced a dual-phase training strategy:
1. Supervised fine-tuning on our new EndoVLA-Motion dataset
2. Reinforcement fine-tuning with task-aware rewards
This approach substantially improves tracking accuracy and achieves zero-shot generalization across diverse GI scenes.
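For readers curious how a two-phase recipe like this is typically wired up, below is a minimal, illustrative PyTorch skeleton. It is not the EndoVLA code: the toy policy network, the synthetic data, and the reward function are hypothetical stand-ins; only the overall supervised-then-reinforcement structure follows the description above.

```python
# Illustrative dual-phase fine-tuning skeleton (hypothetical stand-ins, not the EndoVLA code).
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a vision-language-action head: fused features -> 4 discrete actions.
policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

obs = torch.randn(256, 16)                    # placeholder fused visual + prompt features
expert_actions = torch.randint(0, 4, (256,))  # placeholder expert (surgeon) action labels

# Phase 1: supervised fine-tuning on (observation, expert action) pairs.
for _ in range(100):
    loss = F.cross_entropy(policy(obs), expert_actions)
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: reinforcement fine-tuning with a task-aware reward.
def task_reward(actions, targets):
    # Hypothetical reward: +1 when the sampled action matches the tracking target.
    return (actions == targets).float()

for _ in range(100):
    dist = torch.distributions.Categorical(logits=policy(obs))
    actions = dist.sample()
    reward = task_reward(actions, expert_actions)
    # REINFORCE-style update: raise the log-probability of high-reward actions.
    loss = -(dist.log_prob(actions) * (reward - reward.mean())).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```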
The paper is available at: https://lnkd.in/g35DF7Fq
Excited to share that our paper "CoPESD: A Multi-Level Surgical Motion Dataset for Training Large Vision-Language Models to Co-Pilot Endoscopic Submucosal Dissection" has been accepted to ACM MM (Dataset Track)!
We built CoPESD to help AI better understand surgical workflows, especially the complex motions involved in Endoscopic Submucosal Dissection (ESD). The dataset includes:
• 35+ hours of annotated surgical videos
• 17,679 labeled frames
• 88,395 motion annotations across multiple levels
We designed a hierarchical annotation scheme to capture fine-grained surgical motions, focusing especially on the submucosal dissection phase. Our goal is to enable vision-language models that can one day assist surgeons in real time, like a smart co-pilot in the OR.
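To make "multiple levels" concrete, here is a small, hypothetical example of what one hierarchical motion annotation could look like; the field names and values are illustrative only and do not reproduce the actual CoPESD schema.

```python
# Hypothetical sketch of a hierarchical (multi-level) motion annotation.
# Field names and values are illustrative, not the actual CoPESD schema.
annotation = {
    "frame_id": 10452,
    "phase": "submucosal dissection",        # coarse level: surgical phase
    "task": "expose the submucosal layer",   # mid level: surgeon-intent task
    "motion": {                              # fine level: instrument motion primitive
        "instrument": "electrosurgical knife",
        "primitive": "cut",
        "direction": "left-to-right",
    },
}
```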
Thanks to all collaborators (Guankun Wang, Han Xiao, Huxin Gao, Renrui Zhang, Long Bai, Xiaoxiao Yang, Zhen Li, Hongsheng Li,
Hongliang Ren) and institutions (CUHK, Shanghai AI Lab, Qilu Hospital of SDU) involved. We're excited to see how this dataset can push forward research in surgical AI, robotics, and multimodal learning.
Check out the paper: https://lnkd.in/gkF6A4QY
Excited to share that our paper "Rethinking Data Imbalance in Class-Incremental Surgical Instrument Segmentation" has been accepted by Medical Image Analysis (IF 11.8)!
In this work, we tackled the challenge of training models that can keep learning new surgical instruments over time, without forgetting the old ones. Data imbalance made this especially tricky, so we proposed a plug-and-play framework that balances the data using inpainting and blending techniques, and introduced a new loss function to reduce confusion between similar-looking tools.
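As a rough illustration of the blending idea (the full pipeline and the new loss are detailed in the paper), the sketch below alpha-blends a stored crop of an under-represented instrument into a new training frame; the file paths, paste location, and blending ratio are all hypothetical.

```python
# Rough sketch of blending-based data rebalancing for class-incremental training.
# Paths, paste location, and blending ratio are hypothetical placeholders.
import numpy as np
from PIL import Image

frame = np.asarray(Image.open("new_task_frame.png").convert("RGB"), dtype=np.float32)
crop = np.asarray(Image.open("rare_instrument_crop.png").convert("RGB"), dtype=np.float32)

h, w = crop.shape[:2]
y, x = 40, 60   # where to paste the old-class instrument
alpha = 0.7     # how strongly the old-class crop shows through

frame[y:y + h, x:x + w] = alpha * crop + (1 - alpha) * frame[y:y + h, x:x + w]
Image.fromarray(frame.astype(np.uint8)).save("balanced_frame.png")

# The label for the blended instrument would be added to the training targets,
# so the old class stays represented while new classes are learned.
```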
Big thanks to our amazing team (Shifang Zhao, Long Bai, Kun Yuan, Feng Li, Jieming Yu, Wenzhen Dong, Guankun Wang, Prof. Mobarak I. Hoque, Prof. Nicolas Padoy, Prof. Nassir Navab, Prof. Hongliang Ren) from CUHK, TUM, Strasbourg, and UCL. This collaboration truly brought together ideas from different corners of the world.
The paper is now online: https://lnkd.in/gemZFNUK. Code coming soon.
Call for Papers: ICBIR 2025 | August 2025 | Zhangye, Gansu, China
Join us in the breathtaking landscapes of Zhangye, China for the 2025 International Conference on Biomimetic Intelligence and Robotics (ICBIR), an affiliated event of the Q1 Elsevier journal Biomimetic Intelligence and Robotics (IF 5.4).
We welcome original contributions covering:
• Biomimetic design, materials & actuation
• Bio-inspired sensing, perception & navigation
• Learning-based control & embodied AI
• Soft & adaptive robotics
• Novel real-world applications integrating theory and practice
All accepted papers will be published by Elsevier and indexed in EI & Scopus. Top-ranked submissions will earn best-paper awards and invitations to submit expanded versions to Biomimetic Intelligence and Robotics and other leading journals.
Key Dates:
• Full-paper (or short-abstract) submissions due: July 20, 2025
• Acceptance notifications: August 1, 2025
• Registration & final manuscripts: August 10, 2025
Learn more & submit at: http://www.icbir.org
Let's decode nature's genius and engineer the next generation of intelligent machines, together!
We are honored to share that our lab's paper, "PDZSeg: Adapting the Foundation Model for Dissection Zone Segmentation with Visual Prompts in Robot-Assisted Endoscopic Submucosal Dissection," has been published in the International Journal of Computer Assisted Radiology and Surgery.
The paper was accepted for presentation at IPCAI 2025, and we're especially humbled to receive the IHU Strasbourg Bench to Bedside Award: Honorable Mention.
In this work, we address the challenge of accurately delineating dissection zones during endoscopic submucosal dissection procedures. By integrating flexible visual cues, such as scribbles and bounding boxes, directly onto surgical images, our PDZSeg model guides segmentation for both better precision and enhanced safety. Leveraging a state-of-the-art foundation model (DINOv2) and an efficient LoRA training strategy, we fine-tuned our approach on the specialized ESD-DZSeg dataset. Our experimental results show promising improvements over traditional methods, offering robust support for intraoperative guidance and remote surgical training.
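For a sense of how a DINOv2 backbone can be adapted with LoRA for dense prediction, here is a minimal sketch using the Hugging Face transformers and peft libraries; the LoRA rank, target modules, and one-layer decoder are illustrative choices, not the actual PDZSeg configuration.

```python
# Minimal sketch: LoRA adaptation of a DINOv2 backbone for segmentation.
# Rank, target modules, and the toy decoder are illustrative, not the PDZSeg setup.
import torch
import torch.nn as nn
from transformers import Dinov2Model
from peft import LoraConfig, get_peft_model

backbone = Dinov2Model.from_pretrained("facebook/dinov2-base")
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"])
backbone = get_peft_model(backbone, lora_cfg)  # only the LoRA adapters stay trainable

# Toy head: two classes (dissection zone vs. background).
decoder = nn.Conv2d(backbone.config.hidden_size, 2, kernel_size=1)

def segment(pixel_values):
    # Visual prompts (scribbles / bounding boxes) are assumed to be drawn
    # onto pixel_values before this call.
    tokens = backbone(pixel_values=pixel_values).last_hidden_state[:, 1:]  # drop the CLS token
    b, n, c = tokens.shape
    side = int(n ** 0.5)  # square patch grid for square inputs
    feat = tokens.transpose(1, 2).reshape(b, c, side, side)
    return decoder(feat)  # coarse logits; upsample to image resolution as needed

logits = segment(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2, 16, 16]) with DINOv2's 14-pixel patches
```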
Our sincere thanks to every colleague (Mengya Xu, Wenjin Mo, Guankun Wang, Huxin Gao, An Wang), mentor (Dr. Ning Zhong, Dr. Zhen Li, Dr. Xiaoxiao Yang, Prof. Hongliang Ren), and community member whose support has been indispensable. This achievement reaffirms our collective effort and inspires us to further refine robotic-assisted techniques towards enhanced safety and effectiveness.
Paper available at: https://lnkd.in/g7KcytnE
Big congratulations to Sishen Yuan on successfully defending his PhD thesis, "Magnetic Wireless Robots: System Design, Control and Translational Applications"!
Dr. Yuanโs research advances the field of medical robotics, paving the way for innovative healthcare solutions. A well-deserved milestone after years of dedication and research!
Special thanks to his supervisors, Prof. Hongliang Ren and Prof. Max Q.-H. Meng, as well as the examining committee members, Prof. Gao Shichang, Prof. Wu 'Scott' Yuan, and Prof. Zhidong Wang, for their invaluable guidance and support.
Learn more about Dr. Yuan's research at https://lnkd.in/g7GVhwtv.
Excited to share our recent work accepted to Advanced Robotics Research:
"Low-Strain Flexible Ink-Based Sensors Enable Hyperelastic Inflation Perception and Geometric Patterning"
What we explored:
– Designed Ω-shaped flexible sensors using conductive ink to reduce strain mismatches on inflatable robots
– Developed a light-curing transfer method for precise sensor attachment
– Tested integration with balloon-type robots showing improved deformation tracking at >300% expansion
Our sensor system has the potential to enable:
• Real-time inflation tracking for safer human-robot interaction
• Spatial perception in biomedical devices (catheters, surgical tools)
• 5x reduction in circumference/area error vs. conventional designs
Congrats to the authors: Wenchao Yue, Shuoyuan Chen, Yan Ke, Yingyi Wen, Ruijie Tang, Guohua Hu, and Hongliang Ren.
Paper (open access): https://lnkd.in/gm9DRybP
Congratulations to our Ph.D. candidate, Long Bai, for successfully defending his doctoral dissertation on June 3, 2025!
We extend our sincere gratitude to Prof. Tan Lee, Prof. Qi Dou, and Prof. S. Kevin Zhou for serving as examiners during Long Bai’s defense. Special thanks to his supervisors, Prof. Hongliang Ren and Prof. Jiewen Lai, for their invaluable guidance throughout his Ph.D. journey.
During his time at CUHK RenLab, Dr. Bai has made impressive contributions to surgical and medical artificial intelligence, particularly in multimodal AI.
For more details about his research, visit his personal website: longbai-cuhk.github.io.
Wishing Dr. Long Bai all the best in his future endeavors!