His keynote, “Robotic Endoscopy & Surgical Foundation Models,” offered a deep dive into intelligent minimally invasive surgical robots. Specifically, he highlighted how surgical foundation models, paired with advanced flexible surgical motion generation and perception capabilities, are set to transform procedures.
A big thank you to all organizers, speakers, and participants for fostering such a vibrant exchange of ideas at the forefront of medical robotics. The collaboration and innovation underscore the immense potential of AI and robotics to improve patient outcomes and empower surgeons.
Beyond the award, it’s been a wonderful gathering of our lab members and alumni, celebrating collaboration, innovation, and the friendships that fuel our research.
His talk, “Endoluminal Robotics & Embodied AI in vivo,” shed light on how robotics and embodied intelligence are reshaping the future of minimally invasive interventions. The vision he shared opens exciting pathways for smarter, safer, and more adaptive medical technologies.
Grateful to the #EMA4MICCAI team for creating such a vibrant platform for exchanging ideas at the intersection of medical imaging, AI, and robotics.
In this study, we introduce SurgTPGS, a novel framework that enables real-time, text-promptable 3D semantic querying in surgical environments. By integrating vision-language models with Gaussian Splatting and semantic-aware deformation tracking, our method significantly improves the precision and efficiency of robotic-assisted surgery.
Key Contributions:
โข First text-promptable Gaussian Splatting for 3D surgical scenes
โข Semantic-aware deformation tracking for dynamic anatomy
โข Region-aware optimization for sharper segmentation and smoother reconstruction
โข State-of-the-art results on CholecSeg8K and EndoVis18 datasets
Paving the way for smarter, safer surgical systems. Check out the full paper: https://lnkd.in/euGHFma5
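For a feel of what text-promptable 3D querying means in practice, here is a minimal, illustrative sketch in plain Python: each splat carries a semantic feature aligned with a text encoder, and a text prompt selects the splats whose features match. All names (`Gaussian`, `query_scene`) and the toy 2-D embeddings are hypothetical stand-ins, not the paper’s actual implementation.

```python
from dataclasses import dataclass
from math import sqrt

@dataclass
class Gaussian:
    position: tuple  # 3D center of the splat
    feature: list    # semantic embedding distilled from a vision-language model

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def query_scene(gaussians, text_embedding, threshold=0.8):
    """Return indices of Gaussians whose semantic feature matches the prompt."""
    return [i for i, g in enumerate(gaussians)
            if cosine(g.feature, text_embedding) >= threshold]

# Tiny example: hand-made 2-D "embeddings" stand in for real VLM features.
scene = [
    Gaussian((0, 0, 0), [1.0, 0.0]),   # tissue-like feature (illustrative)
    Gaussian((1, 0, 0), [0.0, 1.0]),   # instrument-like feature (illustrative)
    Gaussian((2, 0, 0), [0.9, 0.1]),
]
prompt = [1.0, 0.0]                     # embedding of the text prompt
print(query_scene(scene, prompt))       # -> [0, 2]
```

In the real system the per-Gaussian features would be optimized jointly with the splatting, so a query like this can run over the full 3D scene in real time.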
The presentation explored the challenges and opportunities in motion generation and perception for flexible robots operating in minimally invasive surgical environments. Prof. Ren emphasized the importance of image-guided robotic systems in enhancing surgical precision, flexibility, and repeatability, while acknowledging the complexities these systems introduce in development.
He shared recent advances from our lab in intelligent motion planning and perception, aiming to enable smart micro-imaging and guided robotic interventions. The proposed remote robotic system is tailored for surgical applications, empowering clinicians with multi-modal sensing and continuous motion generation for dexterous operations.
In this project, we tackled the unique challenges of robotic endoscopy by integrating vision, language grounding, and motion planning into one end-to-end framework. EndoVLA enables:
– Precise polyp tracking through surgeon-issued prompts
– Delineation and following of abnormal mucosal regions
– Adherence to circumferential cutting markers during resections
We introduced a dual-phase training strategy:
1. Supervised fine-tuning on our new EndoVLA-Motion dataset
2. Reinforcement fine-tuning with task-aware rewards
This approach substantially boosts tracking accuracy and achieves zero-shot generalization across diverse GI scenes.
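The dual-phase recipe can be illustrated with a deliberately tiny stand-in: a one-parameter model is first fitted on labeled pairs (the supervised phase), then refined with a task-aware reward instead of labels (the reinforcement phase). Everything below is a toy assumption, not the actual EndoVLA training code.

```python
import random

def supervised_finetune(w, data, lr=0.1, epochs=50):
    """Phase 1: fit scalar w by gradient descent on (x, y) pairs (squared error)."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def reinforcement_finetune(w, reward_fn, sigma=0.05, steps=200, seed=0):
    """Phase 2: hill-climb w using only a reward signal, no labels."""
    rng = random.Random(seed)
    best = reward_fn(w)
    for _ in range(steps):
        cand = w + rng.gauss(0, sigma)
        r = reward_fn(cand)
        if r > best:
            w, best = cand, r
    return w

data = [(1.0, 2.0), (2.0, 4.0)]          # toy "demonstrations": y = 2x
w = supervised_finetune(0.0, data)
# Task-aware reward: highest when behavior matches the target (here, w = 2).
w = reinforcement_finetune(w, lambda w: -abs(w - 2.0))
print(round(w, 2))                        # -> 2.0
```

The point of the second phase is that rewards can encode task success (e.g. keeping a polyp centered) even where dense labels are unavailable.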
The paper is available at: https://lnkd.in/g35DF7Fq
We built CoPESD to help AI better understand surgical workflows, especially the complex motions involved in Endoscopic Submucosal Dissection (ESD). The dataset includes:
🔹 35+ hours of annotated surgical videos
🖼️ 17,679 labeled frames
📊 88,395 motion annotations across multiple levels
We designed a hierarchical annotation scheme to capture fine-grained surgical motions, with a particular focus on the submucosal dissection phase. Our goal is to enable vision-language models that can one day assist surgeons in real time, like a smart co-pilot in the OR.
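As a rough illustration of what a multi-level motion annotation might look like in code, here is a hypothetical nesting (the field names and label vocabulary are invented for the example, not CoPESD’s actual schema):

```python
# Hypothetical hierarchy: video -> frames -> per-level motion labels.
annotation = {
    "video": "esd_case_001",            # illustrative clip id
    "frames": [
        {
            "frame_id": 120,
            "motions": [
                {"level": "surgeme", "label": "dissect"},        # coarse level
                {"level": "sub-motion", "label": "lift flap"},   # fine level
            ],
        },
        {
            "frame_id": 121,
            "motions": [
                {"level": "surgeme", "label": "dissect"},
            ],
        },
    ],
}

def count_motions(clip):
    """Total motion annotations across all frames and levels."""
    return sum(len(f["motions"]) for f in clip["frames"])

print(count_motions(annotation))  # -> 3
```

A hierarchy like this lets a model learn both the coarse phase of the procedure and the fine-grained motion happening within it.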
Thanks to all the authors (including Hongliang Ren) and institutions (CUHK, Shanghai AI Lab, Qilu Hospital of SDU) involved. We’re excited to see how this dataset can push forward research in surgical AI, robotics, and multimodal learning.
📄 Check out the paper: https://lnkd.in/gkF6A4QY
In this work, we tackled the challenge of training models that can keep learning new surgical instruments over time, without forgetting the old ones. Data imbalance made this especially tricky, so we proposed a plug-and-play framework that balances the data using inpainting and blending techniques, and introduced a new loss function to reduce confusion between similar-looking tools.
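To illustrate the data-balancing idea, here is a toy sketch that equalizes class counts by synthesizing blended samples for underrepresented classes. In the real framework the inpainting/blending operates on surgical images; the function names and feature vectors below are invented for the example.

```python
import random
from collections import defaultdict

def blend(a, b, alpha=0.5):
    """Stand-in for the paper's inpainting/blending augmentation."""
    return [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]

def rebalance(dataset, seed=0):
    """Synthesize blended samples until every class reaches the majority count."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for feats, label in dataset:
        by_class[label].append(feats)
    target = max(len(v) for v in by_class.values())
    out = list(dataset)
    for label, feats in by_class.items():
        while sum(1 for _, l in out if l == label) < target:
            a, b = rng.sample(feats, 2) if len(feats) > 1 else (feats[0], feats[0])
            out.append((blend(a, b), label))
    return out

# Imbalanced toy set: 4 "grasper" samples vs. 1 "scissors" sample.
data = [([1.0], "grasper")] * 4 + [([0.0], "scissors")]
balanced = rebalance(data)
counts = {l: sum(1 for _, x in balanced if x == l) for l in {"grasper", "scissors"}}
print(counts)
```

Balancing the replay data this way keeps the new class from being drowned out, which is what makes the continual-learning step viable; the confusion-reducing loss is a separate component not sketched here.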
Join us in the breathtaking landscapes of Zhangye, China for the 2025 International Conference on Biomimetic Intelligence and Robotics (ICBIR), an affiliated event of the Elsevier journal Biomimetic Intelligence and Robotics (IF 5.4).
We welcome original contributions covering:
โข Biomimetic design, materials & actuation
โข Bio-inspired sensing, perception & navigation
โข Learning-based control & embodied AI
โข Soft & adaptive robotics
โข Novel real-world applications integrating theory and practice
All accepted papers will be published by Elsevier and indexed in EI & Scopus. Top-ranked submissions will earn best-paper awards and invitations to submit expanded versions to Biomimetic Intelligence and Robotics and other leading journals.
Key Dates
• Full-paper (or short-abstract) submissions due: July 20, 2025
• Acceptance notifications: August 1, 2025
• Registration & final manuscript: August 10, 2025
Learn more & submit at ▶️ http://www.icbir.org
Let’s decode nature’s genius and engineer the next generation of intelligent machines, together! 🌿🤖