🚀 Thrilled to share that our latest paper, "Upper Airway Anatomical Landmark Dataset for Automated Bronchoscopy and Intubation", has been accepted and published online in Scientific Data!

This work introduces a comprehensive dataset designed to advance AI-driven surgical robotics and medical imaging. By capturing detailed anatomical landmarks of the upper airway, we aim to support safer, more accurate bronchoscopy and intubation procedures, paving the way for improved patient outcomes and robust benchmarking in clinical AI.

๐Ÿ”‘ ๐—›๐—ถ๐—ด๐—ต๐—น๐—ถ๐—ด๐—ต๐˜๐˜€:

– First-of-its-kind dataset focused on airway anatomical landmarks

– Enables benchmarking for automated navigation and intubation tasks

– Openly available to foster collaboration across robotics, AI, and healthcare communities

We hope this resource will accelerate innovation in embodied intelligence for healthcare and inspire new interdisciplinary collaborations.

👉 Read the full paper: https://rdcu.be/eS0d5

Grateful to all co-authors and collaborators from The Chinese University of Hong Kong (Ruoyi Hao, Zhiqing Tang, Catherine Po Ling Chan, Jason Ying Kuen Chan, Prof. Hongliang Ren), Hubei University of Technology (Zhang Yang), Huazhong University of Science and Technology (Yang Zhou), National University of Singapore (Lalithkumar Seenivasan), and Singapore General Hospital (Shuhui Xu, Neville Wei Yang Teo, Kaijun Tay, Vanessa Yee Jueen Tan, Jiun Fong Thong, Kimberley Liqin Kiong, Shaun Loh, Song Tar Toh, and Prof. Chwee Ming Lim), for making this possible. Excited to see how others build upon this foundation!

🚀 Thrilled to share that our paper "Bridging Vision and Language for Robust Context-Aware Surgical Point Tracking: The VL-SurgPT Dataset and Benchmark" has been accepted to AAAI 2026 (Oral)! 🎉

💡 This work introduces VL-SurgPT, the first large-scale multimodal dataset that integrates visual trajectories with semantic point status descriptions in surgical environments.
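
For readers curious what such an annotation could look like in practice, here is a minimal, hypothetical sketch of a single tracked-point record pairing a trajectory with a point-status description; the field names and values are illustrative assumptions, not the released VL-SurgPT schema.

```python
# Hypothetical sketch of one tracked-point record combining a visual
# trajectory with a textual point-status description. Field names and
# values are illustrative only, not the actual VL-SurgPT format.
record = {
    "video_id": "case_001_clip_03",
    "point_id": 17,
    "trajectory": [            # (frame_index, x, y) in pixel coordinates
        (0, 412.0, 233.5),
        (1, 414.2, 236.1),
        (2, 417.8, 240.0),
    ],
    "visibility": [1, 1, 0],   # 1 = visible, 0 = occluded in that frame
    "status_text": "point on tissue surface, becoming occluded by smoke",
}

# Example use: count the frames in which the point is actually visible.
visible_frames = sum(record["visibility"])
print(f"{visible_frames}/{len(record['visibility'])} frames visible")
```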

🔍 Alongside the dataset, we propose TG-SurgPT, a text-guided point tracking method that consistently outperforms vision-only approaches, especially under challenging intraoperative conditions such as smoke, occlusion, and tissue deformation.
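
As a rough illustration of the general idea behind text-guided tracking (not the TG-SurgPT implementation), the sketch below fuses per-point visual features with an embedding of the status description via cross-attention before predicting point displacements; the module names, dimensions, and fusion scheme are all assumptions.

```python
# Minimal, hypothetical sketch of text-conditioned point tracking in PyTorch.
# The architecture, dimensions, and fusion scheme are illustrative only and
# are not taken from the TG-SurgPT paper.
import torch
import torch.nn as nn

class TextGuidedPointTracker(nn.Module):
    def __init__(self, visual_dim=256, text_dim=512, hidden_dim=256):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)  # per-point visual features
        self.text_proj = nn.Linear(text_dim, hidden_dim)      # tokens of the status description
        self.fuse = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.offset_head = nn.Linear(hidden_dim, 2)           # predict (dx, dy) per point

    def forward(self, point_feats, text_emb):
        # point_feats: (B, N, visual_dim) features sampled at current point locations
        # text_emb:    (B, T, text_dim) embeddings of the point-status description
        q = self.visual_proj(point_feats)
        kv = self.text_proj(text_emb)
        fused, _ = self.fuse(q, kv, kv)   # each point attends to the text tokens
        return self.offset_head(fused)    # (B, N, 2) predicted displacements

# Toy usage with random tensors.
tracker = TextGuidedPointTracker()
offsets = tracker(torch.randn(2, 8, 256), torch.randn(2, 12, 512))
print(offsets.shape)  # torch.Size([2, 8, 2])
```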

🙏 We are deeply grateful to all coauthors and especially our clinical collaborators at Shenzhen People's Hospital for their invaluable contributions. Looking forward to engaging with the community at AAAI in Singapore and advancing the conversation on multimodal surgical AI!

Check the paper at https://lnkd.in/grfE5iVi

Project Page: https://lnkd.in/gscM_ciV

🌏 Highlights from the 3rd C4SR+ Workshop at #IROS2025 🌏

🎉 Excited to share that the 3rd C4SR+ Workshop: "Continuum, Compliant, Cooperative, Cognitive Surgical Robotic Systems in the Embodied AI Era" took place during #IROS2025 in Hangzhou, China.

This year's workshop attracted over one hundred participants from across the globe, a fantastic turnout that reflects the growing momentum in surgical robotics and embodied AI.

🎤 Distinguished Speakers

We were honored to host leading experts who shared their groundbreaking research and perspectives, including Prof. Nassir Navab from Technical University of Munich, Prof. Leonardo Mattos from Italian Institute of Technology, Prof. Mingchuan Zhou from Zhejiang University, Prof. Dandan Zhang from Imperial College London, Prof. Yunjie Yang from University of Edinburgh, and Prof. Guoying Gu from Shanghai Jiao Tong University.

🔑 Key Themes Discussed

• Embodied AI in surgery and intelligent operating rooms

• Soft & continuum robotics for minimally invasive procedures

• Human–robot collaboration in clinical practice

• Cognitive surgical systems and decision-making

🏆 Workshop Contributions

– 9 oral and 9 poster presentations from emerging researchers

– Best Paper & Best Presentation Awards recognizing outstanding contributions

🙏 A heartfelt thank you to all speakers, participants, and organizers who made this workshop such a success. The discussions and collaborations will continue to shape the future of surgical robotics.

🔗 Learn more about the workshop and its highlights on the official page: https://lnkd.in/gswzMFAy

🚀 We had a fantastic time at CoRL 2025 (Seoul), presenting our paper "EndoVLA: Dual-Phase Vision-Language-Action for Precise Autonomous Tracking in Endoscopy" and catching up with friends in the robot-learning community.

The work couples a VLM backbone with a dual-phase training recipe (supervised fine-tuning followed by reinforcement fine-tuning, SFT → RFT) to turn language prompts into robust tracking and motor commands for a continuum endoscope.
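
As a rough sketch of what a two-phase SFT → RFT recipe can look like in general (placeholder model, data, and reward, not the EndoVLA pipeline): phase one clones expert actions, and phase two fine-tunes the same policy with a reward-weighted objective.

```python
# Minimal, hypothetical two-phase recipe: supervised fine-tuning (SFT) on
# expert actions, then reinforcement fine-tuning (RFT) with a simple
# reward-weighted loss. Everything here is a placeholder, not EndoVLA.
import torch
import torch.nn as nn
import torch.nn.functional as F

policy = nn.Sequential(nn.Linear(512 + 64, 256), nn.ReLU(), nn.Linear(256, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

def dummy_batch(batch_size=8):
    obs = torch.randn(batch_size, 512 + 64)  # image features + prompt embedding (concatenated)
    target = torch.randn(batch_size, 2)      # e.g. bending commands for a continuum endoscope
    return obs, target

# Phase 1: SFT, behaviour cloning on expert trajectories.
for _ in range(200):
    obs, expert_action = dummy_batch()
    loss = F.mse_loss(policy(obs), expert_action)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2: RFT, here a reward-weighted regression as one simple stand-in
# for a proper RL objective; the reward is negative tracking error.
def reward_fn(action, target):
    return -((action - target) ** 2).sum(dim=-1)

for _ in range(200):
    obs, target = dummy_batch()
    action = policy(obs)
    weights = torch.softmax(reward_fn(action.detach(), target), dim=0)
    loss = (weights * ((action - target) ** 2).sum(dim=-1)).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```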

A highlight was meeting Prof. Ken Goldberg, especially meaningful since Prof. Ren previously served as a postdoc at UC Berkeley, keeping our CUHK↔UCB connection strong.

🤝 Huge thanks to collaborators and everyone who stopped by the poster!

Authors: Chi Kit Ng*, Long Bai*, Guankun Wang*, Yupeng Wang, Huxin Gao, Kun Yuan, Chenhan Jin, Tieyong Zeng, Hongliang Ren
Affiliations: CUHK, TUM

🚀 Prof. Hongliang Ren co-organized and delivered a keynote talk at the CREATE MICCAI 2025 workshop. 🎤

His keynote, “Robotic Endoscopy & Surgical Foundation Models,” took a deep dive into intelligent minimally invasive surgical robots, highlighting how advanced flexible surgical motion generation and perception capabilities are set to transform procedures with surgical foundation models.

A big thank you to all organizers, speakers, and participants for fostering such a vibrant exchange of ideas at the forefront of medical robotics. The collaboration and innovation underscore the immense potential of AI and robotics to improve patient outcomes and empower surgeons.

For workshop details: https://lnkd.in/gvCREK2m

https://sites.google.com/view/create-miccai-2025

🎉 Thrilled to share that our work SurgTPGS has received the MICCAI 2025 Young Scientist Award! 🏆

👏 Huge congratulations to Yiming Huang, Long Bai, Beilei Cui, and our amazing co-authors Kun Yuan, Guankun Wang, Mobarak I. Hoque, Nicolas Padoy, Nassir Navab, and Hongliang Ren for their outstanding contributions.

🤝 Beyond the award, it's been a wonderful gathering of our lab members and alumni, celebrating collaboration, innovation, and the friendships that fuel our research.

🚀 Honored to highlight that Prof. Hongliang Ren delivered a keynote at the #EMA4MICCAI2025 Workshop!

His talk, "Endoluminal Robotics & Embodied AI in vivo," shed light on how robotics and embodied intelligence are reshaping the future of minimally invasive interventions. The vision he shared opens exciting pathways for smarter, safer, and more adaptive medical technologies.

Grateful to the #EMA4MICCAI team for creating such a vibrant platform for exchanging ideas at the intersection of medical imaging, AI, and robotics.

🎉 Thrilled to share that our paper, "SurgTPGS: Semantic 3D Surgical Scene Understanding with Text Promptable Gaussian Splatting", has been nominated and shortlisted for the MICCAI Best Paper and Young Scientist Award at #MICCAI2025!

In this study, we introduce SurgTPGS, a novel framework that enables real-time, text-promptable 3D semantic querying in surgical environments. By integrating vision-language models with Gaussian Splatting and semantic-aware deformation tracking, our method significantly improves the precision and efficiency of robotic-assisted surgery.
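
To give a flavour of what a text-promptable query over a language-embedded Gaussian scene might look like, here is a minimal sketch under assumed per-Gaussian language features (not the SurgTPGS code): each Gaussian's feature is scored against the prompt embedding and the matches are kept.

```python
# Hypothetical sketch of text-promptable querying over per-Gaussian semantic
# features. The feature source, dimensions, and threshold are assumptions and
# are not taken from the SurgTPGS implementation.
import torch
import torch.nn.functional as F

def query_gaussians(gaussian_feats, text_emb, threshold=0.1):
    """Return a boolean mask over Gaussians whose language-aligned feature
    matches the text prompt embedding.

    gaussian_feats: (N, D) feature per 3D Gaussian
    text_emb:       (D,)   embedding of the prompt, e.g. "gallbladder"
    """
    sims = F.cosine_similarity(gaussian_feats, text_emb.unsqueeze(0), dim=-1)
    return sims > threshold

# Toy usage with random, normalized features.
feats = F.normalize(torch.randn(10_000, 512), dim=-1)
prompt = F.normalize(torch.randn(512), dim=0)
mask = query_gaussians(feats, prompt)
print(f"{int(mask.sum())} of {feats.shape[0]} Gaussians selected")
```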

📌 Key Contributions:

• First text-promptable Gaussian Splatting for 3D surgical scenes

• Semantic-aware deformation tracking for dynamic anatomy

• Region-aware optimization for sharper segmentation and smoother reconstruction

• State-of-the-art results on CholecSeg8K and EndoVis18 datasets

Paving the way for smarter, safer surgical systems. Check out the full paper: https://lnkd.in/euGHFma5

Thanks and congrats to the amazing author team:

Yiming Huang, Long Bai, Beilei Cui, Guankun Wang, Hongliang Ren (CUHK); Kun Yuan (Unistra, TUM), Nicolas Padoy (Unistra), Nassir Navab (TUM), and Mobarak I. Hoque (UCL).

🎙️ Prof. Hongliang Ren presented at the 6th CCF China Intelligent Robot Academic Annual Meeting, sharing insights on "Motion Generation and Perception of Flexible Robots in Minimally Invasive Intracavity Surgery".

🧠 Talk Highlights:

The presentation explored the challenges and opportunities in motion generation and perception for flexible robots operating in minimally invasive surgical environments. Prof. Ren emphasized the importance of image-guided robotic systems in enhancing surgical precision, flexibility, and repeatability, while acknowledging the complexities these systems introduce in development.

He shared recent advances from our lab in intelligent motion planning and perception, aiming to enable smart micro-imaging and guided robotic interventions. The proposed remote robotic system is tailored for surgical applications, empowering clinicians with multi-modal sensing and continuous motion generation for dexterous operations.
