During the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024, https://lnkd.in/gaJ_PWPC) in Abu Dhabi, Prof. Hongliang Ren delivered two invited talks on multisensory learning and surgical motion generation and perception.

Workshop 1: Multisensory Transparency-Augmented Teleoperation in Extreme Environments (https://lnkd.in/gcgvKa_h)

Talk Title: Multisensory augmentation and learning of robotic teleoperation in surgical environments

Workshop 2: 2nd Workshop on Machine Learning in Medical Robotics: Bridging ML Theory and Clinical Frontiers (https://lnkd.in/gCK5fVqW)

Talk Title: Compliant surgical motion generation and perception towards intelligent minimally invasive robotic procedures

Additionally, two of our papers, on neurosurgical robots and OCT, were presented at the main conference:

Paper 1: Head-Mounted Hydraulic Needle Driver for Targeted Interventions in Neurosurgery 

Authors: Zhiwei Fang, Chao (Oliver) Xu, Huxin Gao, Danny Tat-Ming Chan, Prof. Wu ‘Scott’ YUAN, Prof. Hongliang Ren

Paper 2: Towards Electricity-Free Pneumatic Miniature Rotation Actuator for Optical Coherence Tomography (OCT) Endoscopy

Authors: Tinghua Zhang, Sishen YUAN, Chao (Oliver) Xu, Peng Liu, Prof. Hongliang Ren, Prof. Wu ‘Scott’ YUAN

These works are based on strong collaboration between LabREN (http://www.labren.org/mm/) and ABILab (https://lnkd.in/gUuzQqDt). We are looking forward to continuing our collaborative efforts in these areas.


Prof. Hongliang Ren gave a talk at the 2nd Hong Kong Clinical-Driven Robotics and Embodied AI TEchnology (#CREATE) Symposium

During the 2nd Hong Kong Clinical-Driven Robotics and Embodied AI TEchnology (#CREATE) Symposium, held by the Centre for Artificial Intelligence and Robotics, Hong Kong Institute of Science & Innovation, CAS, our lab leader Prof. Hongliang Ren gave a talk entitled “Compliant Endoscopic Multisensory Guidance With Soft Flexible Robotics”, featuring our recent research progress in multisensory perception for endoluminal soft robotic operation.


This Saturday, our lab was honored to host esteemed guests, Prof. Nassir Navab from the Technical University of Munich and Prof. Wang from the South China University of Technology. Our lab members took this opportunity to showcase our recent research across a diverse range of fields.

A particular highlight was our effort towards automated and intelligent Endoscopic Submucosal Dissection (ESD) surgery, empowered by our indigenous DREAMS (Dual-arm Robotic Endoscopic Assistant for Minimally Invasive Surgery, https://lnkd.in/gQ3PcMsn) platform. We demonstrated how this platform can enhance ESD procedures when equipped with advanced features such as precise trajectory planning, intelligent cutting decision support, accurate reconstruction, and granular analysis and prediction of motion (https://lnkd.in/gkF6A4QY), empowered by Large Vision-Language Models (LVLMs).

In addition to our progress in ESD, we also presented other projects, including surgical scene reconstruction & depth estimation (https://lnkd.in/gviyHxAp, https://lnkd.in/gtDwbyWg), augmented reality applications in surgery, the skull-mounted neuro-interventional robot (SkullBot, https://lnkd.in/gn8N9zdU), and OCT for port-wine stain analysis (https://lnkd.in/gm6iH3-m).

This visit not only provided an opportunity for knowledge exchange and collaboration but also reaffirmed our lab’s commitment to pushing the boundaries of innovation in the fields of medical robotics and intelligent surgical technologies. We look forward to furthering these conversations and forging new partnerships that will drive the future of healthcare forward.


🎉 CUHK Information Day for Undergraduate Admissions 2024 🎉

On the Info Day for Undergraduate Admissions 2024, we were thrilled to see a big crowd of visitors joining the admission talk and the special event “Well-Rounded Engineer Incubation Showroom” to learn about the experience of developing apps, products, and services.

Our lab member, Sishen YUAN, along with others, showcased our progress in augmented reality for neuro-interventional head-mounted robotics (SkullBot), attracting a lot of interest from the visitors and sparking engaging conversations about the future of surgical technology.

Thank you to everyone who joined us and showed interest in our work. We look forward to more opportunities to share our passion and advancements with the surgical robotics community!


**Exploring Origami Crawlers: Unleashing the Potential of Confined Space Robotics**

We are thrilled to share a new research paper that’s just been published in *Communications Engineering*, showcasing a remarkable advancement in the field of origami-inspired tiny robots. 📰✨

🔗 **[Untethered Bistable Origami Crawler for Confined Applications](https://lnkd.in/gpJHEM-q)**

### What’s the Buzz About?

This research introduces a **magnetically actuated bistable origami crawler**, a miniature robot designed to navigate and perform tasks in confined spaces that are challenging for traditional tethered or wired devices. 🚀

### Key Highlights:

– **Shape-Morphing Capability**: The crawler can transform between an undeployed locomotion state and a deployed load-bearing state, thanks to its bistable design. 🔧

– **Robust Locomotion**: Utilizes out-of-plane crawling for bi-directional locomotion, navigating robustly even in high-friction environments. 🛤️

– **Load-Bearing Applications**: The deployed state allows the crawler to execute tasks like microneedle insertion, opening up possibilities for medical interventions. 🩺

– **Untethered Operation**: Equipped with internal permanent magnets, this crawler operates without external tethers, enhancing its maneuverability and potential for miniaturization.
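For readers new to the idea, bistability can be pictured as a double-well energy landscape: two stable minima (here, the undeployed and deployed states) separated by an energy barrier, so the crawler holds either state without continuous actuation. A toy sketch in Python (purely illustrative, not the paper's mechanical model):

```python
def energy(x):
    # Quartic double-well: two stable minima at x = -1 and x = +1,
    # separated by an unstable equilibrium at x = 0 -- the same
    # qualitative landscape that gives a bistable origami its two states.
    return (x * x - 1.0) ** 2

def settle(x, lr=0.01, steps=5000):
    # Crude gradient descent: the state relaxes into whichever
    # well it starts in, and stays there without further input.
    for _ in range(steps):
        grad = 4.0 * x * (x * x - 1.0)  # dE/dx
        x -= lr * grad
    return x
```

Starting anywhere on one side of the barrier, `settle` converges to that side's stable state, which is why a brief magnetic pulse suffices to switch configurations.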

### Why It Matters:

This technology could provide an alternative approach to problems encountered in confined environments, from medical procedures in the gastrointestinal tract to complex engineering tasks in tight spots.

### What’s Next:

The concept proposed in this work can also be adapted and applied to a variety of other deployable and load-bearing applications, such as fluid collection, stents, and airway support mechanisms. The proposed mechanism design could also potentially be integrated with other actuation methods, like pneumatic systems, for larger-scale applications. 🌠

### Join the Conversation:

This work is a collaborative effort between Dr. Catherine Cai from the National University of Singapore, Dr. Hui Huang from A*STAR – Agency for Science, Technology and Research, and Prof. Hongliang Ren from The Chinese University of Hong Kong.

We’re eager to hear your thoughts on this research, which we hope you find innovative! How do you envision this technology being used in your field? Share your ideas and let’s discuss the future of robotics and origami engineering! 🤖📚

*Don’t forget to check out the full paper for a deep dive into the mechanics, applications, and implications of this incredible new technology. It’s a must-read for anyone interested in the cutting edge of robotics and engineering innovation!* 📖💡


**Exciting News! 🎉**

We’re delighted to share that 6 papers from our lab have been accepted at IEEE ROBIO 2024 (https://lnkd.in/gXQYD7By)!

Our team has made significant contributions to the field of robotic surgery, with a diverse range of research topics including:

– Neural Rendering

– Gaussian Splatting

– Registration and Reconstruction

– Augmented Reality for surgical planning and training

– Soft Continuum Robots

– Vision-guided surgery

Get a sneak peek at our digest figures below. We’ll be sharing more details soon! If you’re attending #ROBIO2024 in Bangkok, we’d love to connect and discuss potential collaborations!

Let’s build connections and drive innovation together!


🎉 Exciting News 🎉 We have made great achievements at #MICCAI2024!

Our paper entitled “LighTDiff: Surgical Endoscopic Image Low-Light Enhancement with T-Diffusion”, led by our former lab member Tong Chen, Qingcheng Lyu, and our PhD student Long Bai, has been awarded the 🎊 MICCAI 2024 Best Paper Runner-Up! 🎊

This is a collaborative work between the lab of Prof. Luping Zhou at The University of Sydney, our #LabREN (http://www.labren.org/mm/), and Qilu Hospital of Shandong University. Congrats to all coauthors!!!

We have also ranked 2nd in the very competitive BraTS Challenge on Sub-Sahara-Africa Adult Glioma (BraTS-SSA)! Our method is entitled “Transferring Knowledge from High-Quality to Low-Quality MRI for Adult Glioma Diagnosis”. Stay tuned for more updates!

Congrats to Team members: Yanguang Zhao, Long Bai, Zhaoxi Zhang, and Prof Hongliang Ren from #LabREN at The Chinese University of Hong Kong, Dr. Yanan Wu from China Medical University, and Dr. Mobarakol Islam from University College London.


Check out our latest work published in #ARSO2024, entitled “Navigation of Tendon-driven Flexible Robotic Endoscope through Deep Reinforcement Learning”. The work was completed by Chi Kit Ng during his undergraduate study in our #LabREN, and he is now continuing his research with us, having been awarded a Hong Kong PhD Fellowship Scheme (#HKPFS) scholarship, the only EE awardee in this batch!

Robotic endoscopes play a crucial role in diagnosing gastrointestinal disease and performing tumor resections. While current research primarily focuses on autonomously controlling rigid robots, establishing control models for flexible robots remains challenging. To address this, model-free deep reinforcement learning (DRL) presents a promising approach for enabling agents to make decisions under uncertainty.

In this paper, we investigate the control policy of a flexible endoscope using the Simulation Open Framework Architecture (SOFA) platform. We design a flexible tendon-driven robotic endoscope (TDRE) and develop a custom simulation environment within SOFA to train DRL agents. Our approach implements the Proximal Policy Optimization (PPO) algorithm to approximate an optimal policy for trajectory planning. The optimal policy facilitates trajectory tracking tasks for the TDRE’s end-effector, such as circle trajectories and action disturbances, without requiring fine-tuning of the policy network parameters. Experimental results demonstrate that our approach achieves near real-time performance (30 FPS). The feedforward neural network of the policy provides feedback, enabling closed-loop control of the TDRE. Furthermore, our experiments show that the navigation success rate of the TDRE exceeds 90% within a tolerant error of 3 mm in free space. Notably, compared to direct training with contact, navigation tasks with contact retrained from a pre-trained policy in free space exhibit enhanced navigation capabilities.
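For intuition, the core of PPO is its clipped surrogate objective, which limits how far each update can move the policy away from the one that collected the trajectories. A minimal sketch of that objective (illustrative only; the SOFA-based training pipeline in the paper is far more involved, and the function name here is hypothetical):

```python
import math

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Mean clipped surrogate objective from PPO (to be maximized).

    logp_new / logp_old: per-action log-probabilities under the updated
    and data-collecting policies; advantages: per-action advantage
    estimates. Toy illustration, not the paper's implementation.
    """
    total = 0.0
    for lp_new, lp_old, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(lp_new - lp_old)            # pi_new(a|s) / pi_old(a|s)
        clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
        total += min(ratio * adv, clipped * adv)     # pessimistic bound
    return total / len(advantages)
```

Clipping removes the incentive to push the probability ratio beyond 1 ± eps, which is what keeps updates stable enough to train directly in simulation.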

Paper link: https://lnkd.in/gDH5xYA5

Co-authors: Chi Kit Ng; Huxin Gao; Tian-Ao Ren; Sam, Jiewen Lai; Hongliang Ren


🎊 Exciting News!!! 🎊

We will present 4 main conference papers and 4 workshop/challenge papers at #MICCAI2024 in Marrakesh, Morocco. These cover interesting topics such as #DDPM, Low-light Image Enhancement, #GaussianSplatting, Depth Reconstruction, Data Robustness, and Medical Image Segmentation. Congrats to all of our awesome collaborators! Do drop by our poster and oral sessions if you are interested in our work!

Main Conference 1: EndoUIC: Promptable Diffusion Transformer for Unified Illumination Correction in Capsule Endoscopy

Long Bai, Oct 08, Poster Session 3, 10:30 – 11:30

Poster ID: T-AM-091

Paper: https://lnkd.in/gV5MzKct

Code: https://lnkd.in/ghYauAGM

Main Conference 2: Endo-4DGS: Endoscopic Monocular Scene Reconstruction with 4D Gaussian Splatting

YIMING HUANG, Oct 08, Poster Session 4, 15:00 – 16:30 

Poster ID: T-PM-074

Paper: https://lnkd.in/gtDwbyWg

Code: https://lnkd.in/ggaDgVxW

Main Conference 3: EndoDAC: Efficient Adapting Foundation Model for Self-Supervised Depth Estimation from Any Endoscopic Camera

Beilei Cui, Oct 08, Poster Session 4, 15:00 – 16:30

Poster ID: T-PM-076

Paper: https://lnkd.in/gQgpFFpq

Code: https://lnkd.in/g_bfk56S

Main Conference 4: LighTDiff: Surgical Endoscopic Image Low-Light Enhancement with T-Diffusion

Tong Chen, Oct 09, Oral Session 16, 13:30 – 15:00, Poster Session 6, 15:00 – 16:30

Poster ID: W-PM-154

Paper: https://lnkd.in/gybFHPmu

Code: https://lnkd.in/gmmWXCnd

Workshop 1: A Review of 3D Reconstruction Techniques for Deformable Tissues in Robotic Surgery

Long Bai, Oct 06, Oral & Poster

Embodied AI and Robotics for HealTHcare (EARTH) Workshop

Paper: https://lnkd.in/gD6juYyV

Code: https://lnkd.in/gnwhzrQn

Workshop 2: Benchmarking Robustness of Endoscopic Depth Estimation with Synthetically Corrupted Data

Beilei Cui, Oct 10, Poster, 15:10 – 16:05

9th International Workshop on Simulation and Synthesis in Medical Imaging (SASHIMI)

Paper: https://lnkd.in/gnWtK37B

Code: https://lnkd.in/gjc2FBWT

Challenge 1: Transferring Knowledge from High-Quality to Low-Quality MRI for Adult Glioma Diagnosis

Long Bai, Oct 06, Oral (Top-performing Team)

BraTS Challenge on Sub-Sahara-Africa Adult Glioma (BraTS-SSA)

Challenge 2: Ensembling Multi-scale Networks for Accurate Adult Glioma Diagnosis

Long Bai, Oct 06, Poster

BraTS Adult Glioma Post Treatment Challenge (BraTS-GLI)

We are thrilled to share our journal paper titled “Patient-mounted NeuroOCT for Targeted Minimally-invasive Micro-resolution Volumetric Imaging in Brain In Vivo”, accepted to Advanced Intelligent Systems! In this paper, we introduce an innovative ✨ wearable neuro optical coherence tomography (neuroOCT) ✨ system, featuring a lightweight hydraulic 5-DoF skullbot combined with a neuroendoscope approximately 0.6 mm in diameter.

This system facilitates targeted, minimally invasive neuroimaging with an axial resolution of about 2.4 μm and a transverse resolution of around 4.5 μm in the deep brain in vivo. The skullbot enables precise deployment of the neuroendoscope with a targeting accuracy of ±1.5 mm transversely and ±0.25 mm longitudinally, confirmed through optical phantom studies. The skullbot can be securely attached to the head, allowing for motion-insensitive stereotactic imaging within the brain.
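To make the stated tolerances concrete, a placement error can be checked against them as follows (a hypothetical helper for illustration only, not code from the published system):

```python
import math

# Reported skullbot targeting accuracy (from the paper):
TRANSVERSE_TOL_MM = 1.5     # +/-1.5 mm in the transverse plane
LONGITUDINAL_TOL_MM = 0.25  # +/-0.25 mm along the insertion axis

def within_tolerance(dx_mm, dy_mm, dz_mm):
    """Return True if a placement error (dx, dy transverse; dz longitudinal,
    all in mm) falls within the reported accuracy bounds.
    Hypothetical illustration only."""
    transverse_err = math.hypot(dx_mm, dy_mm)  # radial error in the transverse plane
    return transverse_err <= TRANSVERSE_TOL_MM and abs(dz_mm) <= LONGITUDINAL_TOL_MM
```

Note that the transverse bound applies to the radial error in the plane, while the longitudinal bound is a simple one-dimensional interval along the insertion axis.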

We validated the system’s capabilities by demonstrating targeted imaging of a tumor in a brain phantom and conducting in vivo micro-resolution volumetric neuroimaging of fine structures within a mouse brain. This advanced device offers in situ disease evaluation at micro-resolution and serves as a promising intraoperative imaging tool, complementing existing clinical whole-brain imaging modalities such as MRI.

Our findings suggest that the neuroOCT system can significantly advance minimally invasive high-resolution targeted neuroendoscopy, thereby improving patient safety during neurosurgical procedures.

The paper will be available at https://lnkd.in/gFREQFbS

Stay tuned!

This is a collaborative work between the CUHK ABI Lab (https://lnkd.in/gUuzQqDt) and REN Lab (http://www.labren.org/mm/).

Congrats to the authors: Chao Xu+, Zhiwei Fang+, Huxin Gao, Tinghua Zhang, Tao Zhang, Peng Liu, Hongliang Ren*, and Wu ‘Scott’ YUAN*
