🚀 Prof. Hongliang Ren co-organized and delivered a keynote talk at the CREATE MICCAI 2025 workshop 🎤.

His keynote, “Robotic Endoscopy & Surgical Foundation Models,” took a deep dive into intelligent minimally invasive surgical robots. In particular, he highlighted how advanced motion generation and perception capabilities for flexible surgical robots, combined with surgical foundation models, are set to transform procedures.

A big thank you to all organizers, speakers, and participants for fostering such a vibrant exchange of ideas at the forefront of medical robotics. The collaboration and innovation underscore the immense potential of AI and robotics to improve patient outcomes and empower surgeons.

For workshop details: https://lnkd.in/gvCREK2m

https://sites.google.com/view/create-miccai-2025

🎉 Thrilled to share that our work SurgTPGS has received the MICCAI 2025 Young Scientist Award! 🏆

๐Ÿ‘ Huge congratulations to YIMING HUANG, Long Bai, Beilei Cui, and our amazing co-authors Kun Yuan, Guankun Wang, Mobarak I. Hoque, Nicolas Padoy, Nassir Navab, and Hongliang Ren for their outstanding contributions.

๐Ÿค Beyond the award, itโ€™s been a wonderful gathering of our lab members and alumniโ€”celebrating collaboration, innovation, and the friendships that fuel our research.


🚀 Honored to highlight that Prof. Hongliang Ren delivered a keynote at the #EMA4MICCAI2025 Workshop!

His talk, “Endoluminal Robotics & Embodied AI in vivo,” shed light on how robotics and embodied intelligence are reshaping the future of minimally invasive interventions. The vision he shared opens exciting pathways for smarter, safer, and more adaptive medical technologies.

Grateful to the #EMA4MICCAI team for creating such a vibrant platform for exchanging ideas at the intersection of medical imaging, AI, and robotics.


🎉 Thrilled to share that our paper, “SurgTPGS: Semantic 3D Surgical Scene Understanding with Text Promptable Gaussian Splatting”, has been shortlisted for the MICCAI Best Paper and Young Scientist Award at #MICCAI2025!

In this study, we introduce SurgTPGS, a novel framework that enables real-time, text-promptable 3D semantic querying in surgical environments. By integrating vision-language models with Gaussian Splatting and semantic-aware deformation tracking, our method significantly improves the precision and efficiency of robotic-assisted surgery. A toy sketch of the text-prompted querying idea follows the contribution list below.

📌 Key Contributions:

• First text-promptable Gaussian Splatting for 3D surgical scenes
• Semantic-aware deformation tracking for dynamic anatomy
• Region-aware optimization for sharper segmentation and smoother reconstruction
• State-of-the-art results on the CholecSeg8K and EndoVis18 datasets
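
For intuition, here is a minimal sketch of what text-promptable querying over a Gaussian scene could look like, assuming each Gaussian carries a language-aligned semantic feature distilled from a vision-language model. The function name, feature shapes, and threshold are illustrative assumptions, not the SurgTPGS API:

```python
# Toy sketch: query a set of 3D Gaussians with a text prompt (assumed design).
import numpy as np

def query_gaussians(text_embedding: np.ndarray,
                    gaussian_features: np.ndarray,
                    threshold: float = 0.25) -> np.ndarray:
    """Return a boolean mask over Gaussians whose semantic feature
    matches the text prompt by cosine similarity."""
    t = text_embedding / np.linalg.norm(text_embedding)
    g = gaussian_features / np.linalg.norm(gaussian_features, axis=1, keepdims=True)
    sim = g @ t                # cosine similarity per Gaussian
    return sim > threshold     # Gaussians to highlight or segment

# Toy usage: 10k Gaussians with 512-d language-aligned features; the prompt
# vector stands in for a real text-encoder output (e.g., a CLIP-style model).
rng = np.random.default_rng(0)
feats = rng.normal(size=(10_000, 512)).astype(np.float32)
prompt = rng.normal(size=512).astype(np.float32)
mask = query_gaussians(prompt, feats)
print(f"{mask.sum()} Gaussians matched the prompt")
```

This snippet only illustrates the prompt-to-region lookup, not the rendering or semantic-aware deformation tracking described in the paper.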

Paving the way for smarter, safer surgical systems. Check out the full paper: https://lnkd.in/euGHFma5

Thanks and congrats to the amazing author team:

YIMING HUANG, Long Bai, Beilei Cui, Guankun Wang, Hongliang Ren (CUHK); Kun Yuan (Unistra, TUM), Nicolas Padoy (Unistra), Nassir Navab (TUM), and Mobarak I. Hoque (UCL).


๐ŸŽ™๏ธ Prof. Hongliang Ren presented at the 6th CCF China Intelligent Robot Academic Annual Meeting, sharing insights on โ€œ๐— ๐—ผ๐˜๐—ถ๐—ผ๐—ป ๐—š๐—ฒ๐—ป๐—ฒ๐—ฟ๐—ฎ๐˜๐—ถ๐—ผ๐—ป ๐—ฎ๐—ป๐—ฑ ๐—ฃ๐—ฒ๐—ฟ๐—ฐ๐—ฒ๐—ฝ๐˜๐—ถ๐—ผ๐—ป ๐—ผ๐—ณ ๐—™๐—น๐—ฒ๐˜…๐—ถ๐—ฏ๐—น๐—ฒ ๐—ฅ๐—ผ๐—ฏ๐—ผ๐˜๐˜€ ๐—ถ๐—ป ๐— ๐—ถ๐—ป๐—ถ๐—บ๐—ฎ๐—น๐—น๐˜† ๐—œ๐—ป๐˜ƒ๐—ฎ๐˜€๐—ถ๐˜ƒ๐—ฒ ๐—œ๐—ป๐˜๐—ฟ๐—ฎ๐—ฐ๐—ฎ๐˜ƒ๐—ถ๐˜๐˜† ๐—ฆ๐˜‚๐—ฟ๐—ด๐—ฒ๐—ฟ๐˜†โ€.

🧠 Talk Highlights:

The presentation explored the challenges and opportunities in motion generation and perception for flexible robots operating in minimally invasive surgical environments. Prof. Ren emphasized the importance of image-guided robotic systems in enhancing surgical precision, flexibility, and repeatability, while acknowledging the complexities these systems introduce in development.

He shared recent advances from our lab in intelligent motion planning and perception, aiming to enable smart micro-imaging and guided robotic interventions. The proposed remote robotic system is tailored for surgical applications, empowering clinicians with multi-modal sensing and continuous motion generation for dexterous operations.


🚀 Excited to share that our paper “EndoVLA: Dual-Phase Vision-Language-Action Model for Autonomous Tracking in Endoscopy” has been accepted to the Conference on Robot Learning (CoRL) 2025!

In this project, we tackled the unique challenges of robotic endoscopy by integrating vision, language grounding, and motion planning into one end-to-end framework. EndoVLA enables:

– Precise polyp tracking through surgeon-issued prompts
– Delineation and following of abnormal mucosal regions
– Adherence to circumferential cutting markers during resections

We introduced a dual-phase training strategy:

1. Supervised fine-tuning on our new EndoVLA-Motion dataset
2. Reinforcement fine-tuning with task-aware rewards

This approach substantially boosts tracking accuracy and achieves zero-shot generalization across diverse GI scenes. A toy sketch of the two-phase recipe appears below.
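
For intuition, here is a hedged toy sketch of such a dual-phase recipe: a supervised stage on (observation, expert action) pairs, followed by a REINFORCE-style stage weighted by a task-aware reward. The tiny MLP, random data, and centering reward below are stand-ins invented for illustration, not the EndoVLA model or its actual rewards:

```python
# Toy two-phase fine-tuning: supervised, then reward-weighted (REINFORCE).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Phase 1: supervised fine-tuning on (observation, expert action) pairs.
for _ in range(100):
    obs, act = torch.randn(8, 64), torch.randn(8, 2)   # stand-in batch
    loss = nn.functional.mse_loss(model(obs), act)
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: reinforcement fine-tuning with a task-aware reward; here the
# (assumed) reward favors actions that keep the tracked target centered.
def reward(action, target_offset):
    return -torch.linalg.norm(action - target_offset, dim=-1)

for _ in range(100):
    obs, target_offset = torch.randn(8, 64), torch.randn(8, 2)
    dist = torch.distributions.Normal(model(obs), 0.1)
    action = dist.sample()   # explore around the policy mean
    # REINFORCE-style update: scale log-probability by the achieved reward.
    loss = -(dist.log_prob(action).sum(-1) * reward(action, target_offset)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```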

The paper is available at: https://lnkd.in/g35DF7Fq


🎉 Excited to share that our paper “CoPESD: A Multi-Level Surgical Motion Dataset for Training Large Vision-Language Models to Co-Pilot Endoscopic Submucosal Dissection” has been accepted to ACM MM 2025 (Dataset Track)!

We built CoPESD to help AI better understand surgical workflows, especially the complex motions involved in Endoscopic Submucosal Dissection (ESD). The dataset includes:

📹 35+ hours of annotated surgical videos
🖼️ 17,679 labeled frames
🔍 88,395 motion annotations across multiple levels

We designed a hierarchical annotation scheme to capture fine-grained surgical motions, focusing especially on the submucosal dissection phase. Our goal is to enable vision-language models that can one day assist surgeons in real time, like a smart co-pilot in the OR.
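
To make the multi-level idea concrete, here is a hypothetical sketch of what one annotated frame might look like in code. The field names and level labels are our own guesses for illustration; they do not reflect CoPESD's actual schema:

```python
# Hypothetical multi-level motion annotation record (illustrative only).
from dataclasses import dataclass, field

@dataclass
class MotionAnnotation:
    instrument_bbox: tuple   # (x1, y1, x2, y2) instrument location in pixels
    phase: str               # coarse level, e.g. the ESD phase
    subtask: str             # mid level, e.g. "lift mucosal flap"
    motion: str              # fine level, e.g. "retract upward"

@dataclass
class AnnotatedFrame:
    video_id: str
    frame_id: int
    annotations: list[MotionAnnotation] = field(default_factory=list)

# Toy usage: one frame carrying one instrument-motion annotation.
frame = AnnotatedFrame("esd_case_001", 4821)
frame.annotations.append(MotionAnnotation(
    instrument_bbox=(312, 188, 402, 260),
    phase="submucosal dissection",
    subtask="lift mucosal flap",
    motion="retract upward",
))
```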

Thanks to all collaborators (Guankun Wang, Han Xiao, Huxin Gao, Renrui Zhang, Long Bai, Xiaoxiao Yang, Zhen Li, Hongsheng Li, Hongliang Ren) and institutions (CUHK, Shanghai AI Lab, Qilu Hospital of SDU) involved. We’re excited to see how this dataset can push forward research in surgical AI, robotics, and multimodal learning.

📄 Check out the paper: https://lnkd.in/gkF6A4QY


🎉 Excited to share that our paper “Rethinking Data Imbalance in Class Incremental Surgical Instrument Segmentation” has been accepted by Medical Image Analysis (IF 11.8)!

In this work, we tackled the challenge of training models that can keep learning new surgical instruments over time without forgetting the old ones. Data imbalance made this especially tricky, so we proposed a plug-and-play framework that rebalances the data using inpainting and blending techniques, and introduced a new loss function to reduce confusion between similar-looking tools. A toy sketch of the blending idea appears below.
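
As a rough illustration of the rebalancing idea, the sketch below alpha-blends a mask-cut instrument region from one frame into another, synthesizing extra samples for an under-represented class. The function and its parameters are toy assumptions; the paper's inpainting-and-blending pipeline is more involved:

```python
# Toy class-rebalancing augmentation: blend a masked instrument into a frame.
import numpy as np

def blend_instrument(target_img: np.ndarray,
                     source_img: np.ndarray,
                     source_mask: np.ndarray,
                     alpha: float = 0.9) -> np.ndarray:
    """Alpha-blend the masked instrument region of source into target."""
    m = source_mask[..., None].astype(np.float32) * alpha
    return (target_img * (1.0 - m) + source_img * m).astype(target_img.dtype)

# Toy usage with random frames and a square stand-in "instrument" mask;
# a real pipeline would cut the instrument out by its segmentation mask.
target = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
source = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
mask = np.zeros((256, 256), dtype=np.float32)
mask[80:160, 80:160] = 1.0
augmented = blend_instrument(target, source, mask)
```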

Big thanks to our amazing team (Shifang Zhao, Long Bai, Kun Yuan, Feng Li, Jieming YU, Wenzhen Dong, Guankun Wang, Prof. Mobarak I. Hoque, Prof. Nicolas Padoy, Prof. Nassir Navab, Prof. Hongliang Ren) from CUHK, TUM, Strasbourg, and UCL. This collaboration truly brought together ideas from different corners of the world 🌍

The paper is now online: https://lnkd.in/gemZFNUK. Code coming soon 👨‍💻

📢 Call for Papers: ICBIR 2025 | August 16–18, 2025 | Zhangye, Gansu, China

Join us in the breathtaking landscapes of Zhangye, China for the 2025 International Conference on Biomimetic Intelligence and Robotics (ICBIR), an affiliated event of the Q1 Elsevier journal Biomimetic Intelligence and Robotics (IF 5.4).

We welcome original contributions covering:

• Biomimetic design, materials & actuation
• Bio-inspired sensing, perception & navigation
• Learning-based control & embodied AI
• Soft & adaptive robotics
• Novel real-world applications integrating theory and practice

All accepted papers will be published by Elsevier and indexed in EI & Scopus. Top-ranked submissions will earn best-paper awards and invitations to submit expanded versions to Biomimetic Intelligence and Robotics and other leading journals.

๐Š๐ž๐ฒ ๐ƒ๐š๐ญ๐ž๐ฌ โ€ข Full-Paper (or Short Abstract) submissions due โ†’ July 20, 2025 โ€ข Acceptance notifications โ†’ August 1, 2025 โ€ข Registration & final manuscript โ†’ August 10, 2025

Learn more & submit at ▶️ http://www.icbir.org

Let’s decode nature’s genius and engineer the next generation of intelligent machines, together! 🌿🤖
