๐Ÿš€ ๐—œ๐—๐—ฅ๐—ฅ ๐Ÿฎ๐Ÿฌ๐Ÿฎ๐Ÿฒ: ๐—•๐—ถ๐—ผ๐—ถ๐—ป๐˜€๐—ฝ๐—ถ๐—ฟ๐—ฒ๐—ฑ ๐—š๐—ฟ๐—ฎ๐˜ƒ๐—ถ๐˜๐˜†-๐—”๐˜„๐—ฎ๐—ฟ๐—ฒ ๐—ฆ๐—ผ๐—ณ๐˜ ๐—ฅ๐—ผ๐—ฏ๐—ผ๐˜๐˜€! ๐Ÿค–๐Ÿชข

Thrilled to share our latest work in The International Journal of Robotics Research (IJRR) on enabling gravity-aware control for portable cable-driven soft slender robots, using only a single IMU and a powerful robophysical simulation-driven framework.

Soft robots are lightweight and flexible, but their high aspect ratios make them extremely sensitive to gravity, causing passive deformation that traditional kinematics just can't handle. This motivated us to rethink how soft robots can sense and compensate for gravity, without bulky sensors or complex hardware.

🧠✨ What we developed:

A ๐—ฏ๐—ถ๐—ผโ€‘๐—ถ๐—ป๐˜€๐—ฝ๐—ถ๐—ฟ๐—ฒ๐—ฑ ๐—ฟ๐—ฒ๐—ฎ๐—น๐Ÿฎ๐˜€๐—ถ๐—บ๐Ÿฎ๐—ฟ๐—ฒ๐—ฎ๐—น ๐—ฐ๐—ผ๐—ป๐˜๐—ฟ๐—ผ๐—น ๐—ฎ๐—ฟ๐—ฐ๐—ต๐—ถ๐˜๐—ฒ๐—ฐ๐˜๐˜‚๐—ฟ๐—ฒ that:

🔹 Streams real-world IMU orientation into a real-time, robust SOFA simulation

🔹 Dynamically reorients virtual gravity to mirror reality

🔹 Uses QP optimization to compute joint-level compensation

🔹 Executes the compensation on both the simulated and the physical robot

All of this using just one IMU. No strain sensors. No cameras. No expensive reconstruction systems.
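
For readers who want a concrete picture of that loop, here is a minimal Python sketch under stated assumptions: read_imu_quaternion(), sim, send_cable_tensions(), the Jacobian J, and the reference tip pose x_ref are hypothetical placeholders, and the paper's actual QP formulation lives in the article, not here.

```python
# Minimal sketch of the single-IMU gravity-compensation loop described above.
# read_imu_quaternion(), sim, send_cable_tensions(), J, and x_ref are
# hypothetical placeholders, not the paper's actual API.
import numpy as np
import cvxpy as cp
from scipy.spatial.transform import Rotation

GRAVITY_WORLD = np.array([0.0, 0.0, -9.81])

def compensation_step(sim, q_xyzw, J, x_ref, tensions):
    # 1) Re-orient virtual gravity so the simulation mirrors the real pose.
    g_base = Rotation.from_quat(q_xyzw).inv().apply(GRAVITY_WORLD)
    sim.set_gravity(g_base)
    sim.step()

    # 2) QP: find the cable-tension change dt that best cancels the
    #    gravity-induced tip deviation, with a small effort penalty.
    #    J maps tension changes to tip displacement (identified in simulation).
    err = x_ref - sim.tip_position()
    dt = cp.Variable(J.shape[1])
    cost = cp.sum_squares(J @ dt - err) + 1e-3 * cp.sum_squares(dt)
    problem = cp.Problem(cp.Minimize(cost), [tensions + dt >= 0])  # cables pull only
    problem.solve()

    # 3) The same compensation is applied in simulation and on hardware.
    return tensions + dt.value

# Control loop: real IMU -> simulated twin -> compensation -> real robot.
# while True:
#     q = read_imu_quaternion()          # single IMU, (x, y, z, w)
#     tensions = compensation_step(sim, q, J, x_ref, tensions)
#     send_cable_tensions(tensions)
```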

🎯 Key Results:

✅ >99% compensation recovery in static tests

✅ ~94% recovery in low-motion dynamic tests

✅ Demonstrated across different two-segment cable-driven soft robots

๐Ÿ’ก ๐—ช๐—ต๐˜† ๐—ถ๐˜ ๐—บ๐—ฎ๐˜๐˜๐—ฒ๐—ฟ๐˜€:

This work shows that soft robots can maintain stable, consistent configurations under changing gravity by integrating virtual sensing + simulation-driven inverse computation. It reduces reliance on physical sensors and opens a pathway toward scalable, generalizable gravity-aware soft robots.

🌱 What's next?

We're exploring how to extend this architecture to virtualizable external force sensing and richer environmental interactions.

Special shoutout to the team, Jiewen Lai and Tian-Ao Ren (co-first authors), Pengfei Ye, Yanjun Liu, Jingyao Sun, and Hongliang Ren, for making this project possible.

🔗 Paper link: https://lnkd.in/gMgwCfvf

📢 Excited to share that our latest research has been accepted by IEEE Robotics & Automation Magazine! 🎉

๐—ง๐—ถ๐˜๐—น๐—ฒ: ๐—” ๐—ง๐—ฟ๐—ฎ๐—ป๐˜€๐—ฒ๐—ป๐—ฑ๐—ผ๐˜€๐—ฐ๐—ผ๐—ฝ๐—ถ๐—ฐ ๐—ง๐—ฒ๐—น๐—ฒ๐—ฟ๐—ผ๐—ฏ๐—ผ๐˜๐—ถ๐—ฐ ๐—ฆ๐˜†๐˜€๐˜๐—ฒ๐—บ ๐—จ๐˜€๐—ถ๐—ป๐—ด ๐—›๐—ฒ๐˜๐—ฒ๐—ฟ๐—ผ๐—ด๐—ฒ๐—ป๐—ฒ๐—ผ๐˜‚๐˜€ ๐—™๐—น๐—ฒ๐˜…๐—ถ๐—ฏ๐—น๐—ฒ ๐— ๐—ฎ๐—ป๐—ถ๐—ฝ๐˜‚๐—น๐—ฎ๐˜๐—ผ๐—ฟ๐˜€ ๐—ณ๐—ผ๐—ฟ ๐—•๐—ถ๐—บ๐—ฎ๐—ป๐˜‚๐—ฎ๐—น ๐—˜๐—ป๐—ฑ๐—ผ๐˜€๐—ฐ๐—ผ๐—ฝ๐—ถ๐—ฐ ๐—ฆ๐˜‚๐—ฏ๐—บ๐˜‚๐—ฐ๐—ผ๐˜€๐—ฎ๐—น ๐——๐—ถ๐˜€๐˜€๐—ฒ๐—ฐ๐˜๐—ถ๐—ผ๐—ป

๐Ÿ” ๐—•๐—ฎ๐—ฐ๐—ธ๐—ด๐—ฟ๐—ผ๐˜‚๐—ป๐—ฑ:

Endoscopic submucosal dissection (ESD) is a key technique for early GI cancer treatment, requiring high dexterity and precision.

๐Ÿ›  ๐—ช๐—ต๐—ฎ๐˜ ๐˜„๐—ฒ ๐—ฑ๐—ถ๐—ฑ:

We developed the first heterogeneous flexible manipulators (HFMs) for bimanual ESD, integrating:

๐Ÿค– ๐—ฆ๐—ฒ๐—ฟ๐—ถ๐—ฎ๐—น ๐—”๐—ฟ๐˜๐—ถ๐—ฐ๐˜‚๐—น๐—ฎ๐˜๐—ฒ๐—ฑ ๐— ๐—ฎ๐—ป๐—ถ๐—ฝ๐˜‚๐—น๐—ฎ๐˜๐—ผ๐—ฟ (๐—ฆ๐—”๐— ) โ€“ for stable, multidirectional tissue traction

๐Ÿ”ฌ ๐—ฃ๐—ฎ๐—ฟ๐—ฎ๐—น๐—น๐—ฒ๐—น ๐—–๐—ผ๐—ป๐˜๐—ถ๐—ป๐˜‚๐˜‚๐—บ ๐—ช๐—ฟ๐—ถ๐˜€๐˜ (๐—ฃ๐—–๐—ช) โ€“ for accurate tissue dissection

๐Ÿ“ ๐—ž๐—ฒ๐˜† ๐—ฐ๐—ผ๐—ป๐˜๐—ฟ๐—ถ๐—ฏ๐˜‚๐˜๐—ถ๐—ผ๐—ป๐˜€:

✔ Kinematic modeling using Denavit–Hartenberg & Cosserat rod methods (see the DH sketch after this list)

✔ Workspace & dexterity analysis via simulation

✔ Validation through 16 ex vivo ESD tests
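
As a quick illustration of the first contribution, here is a minimal sketch of the standard Denavit–Hartenberg homogeneous transform used for serial-arm kinematics such as the SAM's. The joint parameters below are illustrative assumptions, not taken from the paper, and the Cosserat rod integration used for the continuum wrist is omitted.

```python
# Standard Denavit-Hartenberg transform for serial-arm forward kinematics.
# The (theta, d, a, alpha) values below are illustrative only.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform from link frame i-1 to frame i (standard DH)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Chain per-joint transforms to get the end-effector pose in the base frame.
dh_rows = [(np.deg2rad(30.0), 0.010, 0.020, np.pi / 2),
           (np.deg2rad(-15.0), 0.000, 0.025, 0.0)]
T = np.eye(4)
for row in dh_rows:
    T = T @ dh_transform(*row)
print(T[:3, 3])  # end-effector position in the base frame
```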

💡 This work demonstrates a novel strategy for surgical robotics: leveraging heterogeneous structures to enhance flexibility, stiffness, and accuracy in minimally invasive procedures.

๐Ÿ‘ Kudos to our amazing team and collaborators from CUHK (Prof. Huxin Gao, Tao Zhang, Prof. Hongliang Ren), Qilu Hospital (Xiaoxiao Yang, Prof. ๅทฆ็ง€ไธฝ, Prof. Yanqing Li), Southern University of Science and Technology (Xiao Xiao, Prof. Qinghu Meng), and Beijing Institute of Technology (Prof. Changsheng Li)!

📖 Stay tuned for the full article in IEEE RAM!

🚀 Thrilled to share that our recent work has been honored with the Robotics Best Paper Award at IEEE #ROBIO2025 in Chengdu.

๐Ÿ† Paper: Contact-Aided Navigation of Flexible Robotic Endoscope Using Deep Reinforcement Learning in Dynamic Stomach

๐Ÿ‘ฉโ€๐Ÿ”ฌ Authors: Chi Kit Ng, Huxin Gao, Tianao Ren, Prof. Jiewen Lai, and Prof. Hongliang Ren

๐Ÿ” ๐—ช๐—ต๐˜† ๐—ถ๐˜ ๐—บ๐—ฎ๐˜๐˜๐—ฒ๐—ฟ๐˜€:

Navigating flexible robotic endoscopes in the dynamic, deformable stomach environment is a grand challenge. Our proposed Contact-Aided Navigation (CAN) strategy, powered by deep reinforcement learning and force feedback, achieved:

• 100% success rate in both static and dynamic simulated stomach environments

• Average navigation error of just 1.6 mm

• Robust generalization even under strong external disturbances

This work highlights how embodied AI and biomechanics-inspired strategies can transform surgical robotics, enabling safer and more precise navigation in complex clinical environments.
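
To make the contact-aided idea concrete, here is a hedged sketch of how a DRL policy can fold force feedback into its observation and reward. The signal layout, thresholds, and scales are illustrative assumptions, not the paper's actual state, action, or reward design.

```python
# Illustrative sketch only: the real CAN state/action/reward design is in the
# paper. Poses, goals, and force thresholds here are assumed placeholders.
import numpy as np

def build_observation(tip_pose, goal, contact_force, f_touch=0.05):
    """Policy input that folds force feedback in alongside pose and goal."""
    in_contact = np.linalg.norm(contact_force) > f_touch   # assumed threshold (N)
    return np.concatenate([
        tip_pose,                    # endoscope tip pose
        goal - tip_pose[:3],         # vector to the navigation target
        contact_force,               # measured wall-interaction force
        [float(in_contact)],
    ])

def shaped_reward(dist_to_goal, contact_force, f_safe=0.3):
    """Progress is rewarded; gentle wall contact is tolerated (it can brace and
    guide the tip), while excessive force on the stomach wall is penalized."""
    excess = max(0.0, np.linalg.norm(contact_force) - f_safe)
    return -dist_to_goal - 10.0 * excess
```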

Check the paper at https://lnkd.in/g6KgZTdD

๐Ÿ™ Huge thanks to the team, collaborators, and the broader robotics community for the support and inspiration.

🚀 Thrilled to share that our latest paper, “Upper Airway Anatomical Landmark Dataset for Automated Bronchoscopy and Intubation,” has been accepted and published online in Scientific Data!

This work introduces a comprehensive dataset designed to advance AI-driven surgical robotics and medical imaging. By capturing detailed anatomical landmarks of the upper airway, we aim to support safer, more accurate bronchoscopy and intubation procedures, paving the way for improved patient outcomes and robust benchmarking in clinical AI.

๐Ÿ”‘ ๐—›๐—ถ๐—ด๐—ต๐—น๐—ถ๐—ด๐—ต๐˜๐˜€:

– First-of-its-kind dataset focused on airway anatomical landmarks

– Enables benchmarking for automated navigation and intubation tasks

– Openly available to foster collaboration across robotics, AI, and healthcare communities

We hope this resource will accelerate innovation in embodied intelligence for healthcare and inspire new interdisciplinary collaborations.

👉 Read the full paper: https://rdcu.be/eS0d5

Grateful to all co-authors and collaborators from The Chinese University of Hong Kong (Ruoyi Hao, Zhiqing Tang, Catherine Po Ling Chan, Jason Ying Kuen Chan, Prof. Hongliang Ren), Hubei University of Technology (Zhang Yang), Huazhong University of Science and Technology (Yang Zhou), National University of Singapore (Lalithkumar Seenivasan), and Singapore General Hospital (Shuhui Xu, Neville Wei Yang Teo, Kaijun Tay, Vanessa Yee Jueen Tan, Jiun Fong Thong, Kimberley Liqin Kiong, Shaun Loh, Song Tar Toh, and Prof. Chwee Ming Lim), for making this possible. Excited to see how others build upon this foundation!

🚀 Thrilled to share that our paper “Bridging Vision and Language for Robust Context-Aware Surgical Point Tracking: The VL-SurgPT Dataset and Benchmark” has been accepted to AAAI 2026 (Oral)! 🎉

💡 This work introduces VL-SurgPT, the first large-scale multimodal dataset that integrates visual trajectories with semantic point status descriptions in surgical environments.

๐Ÿ”Alongside the dataset, we propose ๐—ง๐—š-๐—ฆ๐˜‚๐—ฟ๐—ด๐—ฃ๐—ง, a text-guided point tracking method that consistently outperforms vision-only approaches, especially under challenging intraoperative conditions such as smoke, occlusion, and tissue deformation.

๐Ÿ™ We are deeply grateful to all coauthors and especially our clinical collaborators at Shenzhen Peopleโ€™s Hospital for their invaluable contributions. Looking forward to engaging with the community at AAAI in Singapore and advancing the conversation on multimodal surgical AI!

Check the paper at https://lnkd.in/grfE5iVi

Project Page: https://lnkd.in/gscM_ciV

๐ŸŒ Highlights from the ๐Ÿฏ๐—ฟ๐—ฑ ๐—–๐Ÿฐ๐—ฆ๐—ฅ+ ๐—ช๐—ผ๐—ฟ๐—ธ๐˜€๐—ต๐—ผ๐—ฝ at #IROS2025 ๐ŸŒ

🎉 Excited to share that the 3rd C4SR+ Workshop: Continuum, Compliant, Cooperative, Cognitive Surgical Robotic Systems in the Embodied AI Era took place during #IROS2025 in Hangzhou, China.

This year's workshop attracted over one hundred participants from across the globe, a fantastic turnout that reflects the growing momentum in surgical robotics and embodied AI.

🎤 Distinguished Speakers

We were honored to host leading experts who shared their groundbreaking research and perspectives, including Prof. Nassir Navab from the Technical University of Munich, Prof. Leonardo Mattos from the Italian Institute of Technology, Prof. Mingchuan Zhou from Zhejiang University, Prof. Dandan Zhang from Imperial College London, Prof. Yunjie Yang from the University of Edinburgh, and Prof. Guoying Gu from Shanghai Jiao Tong University.

🔑 Key Themes Discussed

• Embodied AI in surgery and intelligent operating rooms

• Soft & continuum robotics for minimally invasive procedures

• Human–robot collaboration in clinical practice

• Cognitive surgical systems and decision-making

๐Ÿ† Workshop Contributions

– 9 oral and 9 poster presentations from emerging researchers

– Best Paper & Best Presentation Awards recognizing outstanding contributions

๐Ÿ™ A heartfelt thank you to all speakers, participants, and organizers who made this workshop such a success. The discussions and collaborations will continue to shape the future of surgical robotics.

🔗 Learn more about the workshop and its highlights on the official page: https://lnkd.in/gswzMFAy

🚀 We had a fantastic time at CoRL 2025 (Seoul), presenting our paper “EndoVLA: Dual-Phase Vision-Language-Action for Precise Autonomous Tracking in Endoscopy” and catching up with friends in the robot-learning community.

The work couples a VLM backbone with a dual-phase training recipe (SFT → RFT) to turn language prompts into robust tracking and motor commands for a continuum endoscope.
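
In outline, a dual-phase recipe of that shape can be sketched as below. The model, demonstration data, environment, and plain REINFORCE-style update are hypothetical stand-ins chosen to show the structure; EndoVLA's actual objectives are described in the paper.

```python
# Schematic of an SFT -> RFT recipe; model, demos, env, and the REINFORCE
# update are hypothetical stand-ins, not the EndoVLA release.
import torch
import torch.nn.functional as F

def sft_phase(model, demos, opt):
    # Phase 1: supervised fine-tuning on (prompt, frame) -> action demonstrations.
    for prompt, frame, action in demos:
        loss = F.mse_loss(model(prompt, frame), action)
        opt.zero_grad(); loss.backward(); opt.step()

def rft_phase(model, env, opt, episodes=100):
    # Phase 2: reinforcement fine-tuning against a task reward (e.g. keeping
    # the prompted target centered in the endoscopic view).
    for _ in range(episodes):
        prompt, frame = env.reset()
        log_probs, rewards, done = [], [], False
        while not done:
            dist = torch.distributions.Normal(model(prompt, frame), 0.1)
            action = dist.sample()
            log_probs.append(dist.log_prob(action).sum())
            frame, reward, done = env.step(action)
            rewards.append(reward)
        loss = -torch.stack(log_probs).sum() * sum(rewards)  # REINFORCE on return
        opt.zero_grad(); loss.backward(); opt.step()
```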

A highlight was meeting Prof. Ken Goldberg, especially meaningful since Prof. Ren previously served as a postdoc at UC Berkeley, keeping our CUHK↔UCB connection strong.

๐Ÿค Huge thanks to collaborators and everyone who stopped by the poster!

Authors: Chi Kit Ng*, Long Bai*, Guankun Wang*, Yupeng Wang, Huxin Gao, Kun Yuan, Chenhan Jin, Tieyong Zeng, Hongliang Ren
Affiliations: CUHK, TUM

🚀 Prof. Hongliang Ren co-organized the CREATE MICCAI 2025 workshop and delivered a keynote talk there 🎤.

His keynote, “Robotic Endoscopy & Surgical Foundation Models,” took a deep dive into intelligent minimally invasive surgical robots. Specifically, he highlighted how advanced flexible surgical motion generation and perception capabilities, built on surgical foundation models, are set to transform procedures.

A big thank you to all organizers, speakers, and participants for fostering such a vibrant exchange of ideas at the forefront of medical robotics. The collaboration and innovation underscore the immense potential of AI and robotics to improve patient outcomes and empower surgeons.

For workshop details: https://lnkd.in/gvCREK2m

🎉 Thrilled to share that our work SurgTPGS has been awarded the MICCAI 2025 Young Scientist Award! 🏆

๐Ÿ‘ Huge congratulations to YIMING HUANG, Long Bai, Beilei Cui, and our amazing co-authors Kun Yuan, Guankun Wang, Mobarak I. Hoque, Nicolas Padoy, Nassir Navab, and Hongliang Ren for their outstanding contributions.

๐Ÿค Beyond the award, itโ€™s been a wonderful gathering of our lab members and alumniโ€”celebrating collaboration, innovation, and the friendships that fuel our research.

🚀 Honored to highlight that Prof. Hongliang Ren delivered a keynote at the #EMA4MICCAI2025 Workshop!

His talk, “Endoluminal Robotics & Embodied AI in vivo,” shed light on how robotics and embodied intelligence are reshaping the future of minimally invasive interventions. The vision he shared opens exciting pathways for smarter, safer, and more adaptive medical technologies.

Grateful to the #EMA4MICCAI team for creating such a vibrant platform for exchanging ideas at the intersection of medical imaging, AI, and robotics.
