🎉 Check out our #MICCAI2024 accepted paper “EndoUIC: Promptable Diffusion Transformer for Unified Illumination Correction in Capsule Endoscopy”.

In this work, we incorporate a set of learnable parameters that prompt the learning targets, enabling the diffusion model to effectively address the unified illumination correction challenge (both underexposure and overexposure) in capsule endoscopy. We also propose a new capsule endoscopy dataset that includes underexposed and overexposed images along with their ground truth.

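For readers curious how a set of learnable parameters can prompt a network toward a target, here is a minimal, hypothetical PyTorch sketch of the general idea: trainable prompt tokens are prepended to the token sequence of a transformer block so that attention can inject the illumination-correction cue into every image token. All module names, shapes, and hyperparameters below are ours for illustration, not the EndoUIC implementation.

```python
import torch
import torch.nn as nn

class PromptedBlock(nn.Module):
    """Toy transformer block with learnable prompt tokens (illustrative)."""
    def __init__(self, dim=256, num_heads=8, num_prompts=8):
        super().__init__()
        # The learnable parameters that "prompt" the learning target.
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, dim) * 0.02)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, tokens):                      # tokens: (B, N, dim)
        b = tokens.size(0)
        x = torch.cat([self.prompts.expand(b, -1, -1), tokens], dim=1)
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]               # prompts mix with tokens
        x = x + self.mlp(self.norm2(x))
        return x[:, self.prompts.size(1):]          # drop the prompt slots

out = PromptedBlock()(torch.randn(2, 196, 256))     # e.g. 14x14 patch tokens
print(out.shape)                                    # torch.Size([2, 196, 256])
```

In a diffusion model, such a block would sit inside the denoiser, and separate prompt banks could encode the under- and overexposure targets.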
Thanks to all of our collaborators from multiple institutions: Long Bai, Qiaozhi Tan, Zhicheng He, Sishen YUAN, Prof. Hongliang Ren from CUHK & SZRI, Tong Chen from USYD, Wan Jun Nah from Universiti Malaya, Yanheng Li from CityU HK, Prof. Zhen CHEN, Prof. Jinlin Wu, Prof. Hongbin Liu from CAIR HK, Dr. Mobarakol Islam from WEISS, UCL, and Dr. Zhen Li from Qilu Hospital of SDU.

Paper: https://lnkd.in/gJaqikqj

Code & Dataset: https://lnkd.in/ghYauAGM

📊 **Empowering Robotic Surgery with SAM 2: An Empirical Study on Surgical Instrument Segmentation** 🤖👨‍⚕️

We’re excited to share our latest research on Segment Anything Model 2 (SAM 2). This empirical evaluation examines SAM 2’s robustness and generalization in surgical image and video segmentation, a critical component for enhancing precision and safety in the operating room.

🔬 **Key Findings**:

– In general, SAM 2 outperforms its predecessor in instrument segmentation, showing much-improved zero-shot generalization to the surgical domain.

– Utilizing bounding box prompts, SAM 2 achieves remarkable results, setting a new benchmark in surgical image segmentation.

– With a single point prompt on the first frame, SAM 2 demonstrates substantial improvements in video segmentation over SAM, which requires point prompts on every frame. This suggests great potential for video-based surgical tasks (see the prompting sketch after these findings).

– Resilience Under Common Corruptions: SAM 2 shows impressive robustness against real-world image corruption, maintaining performance under various challenges such as compression, noise, and blur.

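To make these prompting modes concrete, here is a minimal sketch written against the public `segment-anything-2` repository's Python API (the checkpoint/config paths and dummy inputs are placeholders, and exact method names, e.g. `add_new_points`, may differ between releases):

```python
import numpy as np
from sam2.build_sam import build_sam2, build_sam2_video_predictor
from sam2.sam2_image_predictor import SAM2ImagePredictor

cfg, ckpt = "sam2_hiera_l.yaml", "sam2_hiera_large.pt"   # placeholder paths

# Image mode: prompt with a bounding box around an instrument.
predictor = SAM2ImagePredictor(build_sam2(cfg, ckpt))
frame = np.zeros((480, 640, 3), dtype=np.uint8)          # stand-in image
predictor.set_image(frame)
box = np.array([100, 120, 300, 360])                     # x1, y1, x2, y2
masks, scores, _ = predictor.predict(box=box[None, :], multimask_output=False)

# Video mode: one positive point on the first frame, then propagate;
# no further prompts are needed on later frames.
video_predictor = build_sam2_video_predictor(cfg, ckpt)
state = video_predictor.init_state(video_path="frames_dir")  # dir of JPEGs
video_predictor.add_new_points(
    state, frame_idx=0, obj_id=1,
    points=np.array([[200, 240]], dtype=np.float32),     # (x, y) click
    labels=np.array([1], dtype=np.int32),                # 1 = foreground
)
for frame_idx, obj_ids, mask_logits in video_predictor.propagate_in_video(state):
    pass  # per-frame instrument masks for each tracked object
```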
🔧 **Practical Implications**:

– With faster inference speeds, SAM 2 is poised to provide quick, accurate segmentation, making it a valuable asset in the clinical setting.

🔗 **Learn More**:

For those interested in the technical depth, our paper is available on [arXiv](https://lnkd.in/gHfdrvj3).

We’re eager to engage with the community and explore how SAM 2 can revolutionize surgical applications.

Thanks to the team contributions of Jieming YU, An Wang, Wenzhen Dong, Mengya Xu, Jie Wang, Long Bai, and Hongliang Ren from the Department of Electronic Engineering, The Chinese University of Hong Kong, and the Shenzhen Research Institute of CUHK, and Mobarakol Islam from WEISS – Wellcome / EPSRC Centre for Interventional and Surgical Sciences, UCL.

๐Ÿ› ๏ธ Introducing CAT-SD: Privacy-Centric AI in Robotic Surgery ๐Ÿค–

In our recent work, “Privacy-Preserving Synthetic Continual Semantic Segmentation for Robotic Surgery”, published in IEEE Transactions on Medical Imaging, we propose a state-of-the-art framework for continual semantic segmentation in robotic surgery. It addresses catastrophic forgetting in deep neural networks, enhancing surgical precision without compromising patient privacy.

🔒 Privacy-First Synthetic Data: We’ve crafted a solution that blends open-source instrument data with synthesized backgrounds, ensuring real patient data remains confidential.

💡 Innovative Features:

– Class-Aware Temperature Normalization (CAT) to prevent forgetting of previously learned tasks (a toy sketch of this idea follows the list).

– Multi-Scale Shifted-Feature Distillation (SD) to preserve spatial relationships for robust feature learning.

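As a rough illustration of the class-aware temperature idea, here is a toy distillation loss under our own assumptions (hypothetical names and hand-picked per-class temperatures; not the paper's exact formulation): previously learned classes get a higher temperature, so their softened teacher distributions are easier to match and harder to forget.

```python
import torch
import torch.nn.functional as F

def class_aware_temp_distillation(student_logits, teacher_logits, class_temps):
    """Toy class-aware temperature distillation (illustrative only).

    student_logits, teacher_logits: (B, C, H, W) segmentation logits.
    class_temps: (C,) per-class temperatures.
    """
    t = class_temps.view(1, -1, 1, 1)               # broadcast over B, H, W
    p_teacher = F.softmax(teacher_logits / t, dim=1)
    log_p_student = F.log_softmax(student_logits / t, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

# Hypothetical usage: three previously learned classes, one new class.
temps = torch.tensor([2.0, 2.0, 2.0, 1.0])
loss = class_aware_temp_distillation(torch.randn(2, 4, 64, 64),
                                     torch.randn(2, 4, 64, 64), temps)
```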
Read the paper at https://lnkd.in/eTy8KAC5

Code is also available at https://lnkd.in/eMzNs2Be

Co-authors: Mengya Xu, Mobarakol Islam, Long Bai, Hongliang Ren

📢 Call for Papers (submissions due July 20): ICBIR 2025 | August 16–18, 2025 | Zhangye, Gansu, China

Join us in the breathtaking landscapes of Zhangye, China for the 2025 International Conference on Biomimetic Intelligence and Robotics (ICBIR), an affiliated event of the Q1 Elsevier journal Biomimetic Intelligence and Robotics (IF 5.4).

We welcome original contributions covering:
• Biomimetic design, materials & actuation
• Bio-inspired sensing, perception & navigation
• Learning-based control & embodied AI
• Soft & adaptive robotics
• Novel real-world applications integrating theory and practice

All accepted papers will be published by Elsevier and indexed in EI & Scopus. Top-ranked submissions will earn best-paper awards and invitations to submit expanded versions to Biomimetic Intelligence and Robotics and other leading journals.

๐Š๐ž๐ฒ ๐ƒ๐š๐ญ๐ž๐ฌ 

โ€ข Full-Paper (or Short Abstract) submissions due โ†’ July 20, 2025 

โ€ข Acceptance notifications โ†’ August 1, 2025 

โ€ข Registration & final manuscript โ†’ August 10, 2025

Learn more & submit at โ–ถ๏ธ http://www.icbir.org

4 papers accepted at MICCAI2024

Congratulations to the following members (Long, Beilei, Yiming) on their papers accepted for presentation at MICCAI2024 (858 of 2,771 submissions accepted this year, an acceptance rate of 31%):

  1. Endo-4DGS: Endoscopic Monocular Scene Reconstruction with 4D Gaussian Splatting
  2. EndoUIC: Promptable Diffusion Transformer for Unified Illumination Correction in Capsule Endoscopy
  3. LighTDiff: Surgical Endoscopic Image Low-Light Enhancement with T-Diffusion
  4. EndoDAC: Efficient Adapting Foundation Model for Self-Supervised Depth Estimation from Any Endoscopic Camera

Team presentation and award at IPCAI2024

Congratulations to Beilei and Mobarakol on the paper “Surgical-DINO: Adapter Learning of Foundation Models for Depth Estimation in Endoscopic Surgery” (Beilei Cui, Mobarakol Islam, Long Bai, Hongliang Ren), which was shortlisted for the IPCAI2024 Best Paper Award and selected for a long presentation at IPCAI2024 in Barcelona, Spain.

The International Conference on Information Processing in Computer-Assisted Interventions (IPCAI) is one of the most important venues for disseminating innovative peer-reviewed research in computer-assisted surgery and minimally invasive interventions. Now in its 15th year, IPCAI is an interdisciplinary conference that attracts clinicians, engineers and computer science researchers from various backgrounds, including machine learning, robotics, computer vision, medical imaging, data science, and sensing technologies. IPCAI fosters connections and showcases high-quality research in a unique and focused two-day event. The conference is formatted specifically to actively engage the attendees.

IPCAI 2024 was held on June 18–19, 2024, in conjunction with the Computer-Assisted Radiology and Surgery (CARS) conference in Barcelona, Spain.

Dr. Ren Wins Young Researcher Award, CUHK

Dr. Ren won the CUHK Young Researcher Award 2024. Congratulations!

The Young Researcher Award is nominated annually by each Faculty to recognize young academic staff with exemplary research achievements. The Award consists of a plaque and an amount of HK$100,000 in the form of a research grant.