In this work, by incorporating a set of learnable prompt parameters into the learning targets, the diffusion model effectively addresses the unified illumination correction challenge in capsule endoscopy. We also introduce a new capsule endoscopy dataset containing underexposed and overexposed images along with ground-truth references.
Thanks to all of our collaborators from multiple institutions: Long Bai, Qiaozhi Tan, Zhicheng He, Sishen Yuan, Prof. Hongliang Ren from CUHK & SZRI, Tong Chen from USYD, Wan Jun Nah from Universiti Malaya, Yanheng Li from CityU HK, Prof. Zhen Chen, Prof. Jinlin Wu, Prof. Hongbin Liu from CAIR HK, Dr. Mobarakol Islam from WEISS, UCL, and Dr. Zhen Li from Qilu Hospital of SDU.
We’re excited to share our latest research on the Segment Anything Model 2 (SAM 2). This empirical evaluation examines SAM 2’s robustness and generalization in surgical image and video segmentation, a critical component for enhancing precision and safety in the operating room.
**Key Findings**:
– In general, SAM 2 outperforms its predecessor in instrument segmentation, showing much-improved zero-shot generalization to the surgical domain.
– Utilizing bounding box prompts, SAM 2 achieves remarkable results, setting a new benchmark in surgical image segmentation.
– With a single point as the prompt on the first frame, SAM 2 demonstrates substantial improvements in video segmentation over SAM, which requires point prompts on every frame. This suggests great potential for video-based surgical tasks.
– Resilience Under Common Corruptions: SAM 2 shows impressive robustness against real-world image corruption, maintaining performance under various challenges such as compression, noise, and blur.
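Robustness studies like the one above typically score predicted masks against ground truth with region-overlap metrics before and after corrupting the input. As a minimal illustration (toy masks, not results from our experiments), the standard IoU and Dice computations in NumPy look like this:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient (pixel-wise F1) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0

# Toy example: a 4x4 predicted mask vs. ground truth.
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True      # 4 pixels
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True  # 6 pixels, 4 overlap
print(iou(pred, gt))   # 4 / 6 ≈ 0.667
print(dice(pred, gt))  # 8 / 10 = 0.8
```

Comparing these scores on clean versus corrupted inputs (compression, noise, blur) gives the per-corruption degradation curves such evaluations report.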
**Practical Implications**:
– With faster inference speeds, SAM 2 is poised to provide quick, accurate segmentation, making it a valuable asset in the clinical setting.
**Learn More**:
For those interested in the technical depth, our paper is available on [arXiv](https://lnkd.in/gHfdrvj3).
We’re eager to engage with the community and explore how SAM 2 can revolutionize surgical applications.
In our recent work, “Privacy-Preserving Synthetic Continual Semantic Segmentation for Robotic Surgery”, published in IEEE Transactions on Medical Imaging, we propose a state-of-the-art framework for continual semantic segmentation in robotic surgery. It addresses catastrophic forgetting in deep neural networks, enhancing surgical precision without compromising patient privacy.
Privacy-First Synthetic Data: We’ve crafted a solution that blends open-source instrument data with synthesized backgrounds, ensuring real patient data remains confidential.
Innovative Features:
– Class-Aware Temperature Normalization (CAT) to prevent forgetting of previously learned tasks.
– Multi-Scale Shifted-Feature Distillation (SD) to preserve spatial relationships for robust feature learning.
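To give a rough feel for the temperature idea behind CAT, here is a generic sketch of per-class temperature scaling before a softmax (an illustration of the general technique, not the paper’s exact formulation): scaling each class’s logit by its own temperature softens or sharpens that class’s contribution, which distillation-based continual learners use to keep old-task predictions informative.

```python
import numpy as np

def class_temperature_softmax(logits: np.ndarray, temps: np.ndarray) -> np.ndarray:
    """Softmax over logits scaled by a per-class temperature.

    logits: (N, C) raw scores; temps: (C,) positive temperatures.
    A larger temperature softens a class's logit, shrinking its
    dominance; classes from previously learned tasks can be given
    temperatures that keep their soft predictions alive during
    distillation.
    """
    z = logits / temps                      # class-wise scaling
    z = z - z.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.0]])
# Uniform temperature 1 reduces to a plain softmax.
p_plain = class_temperature_softmax(logits, np.ones(3))
# Raising class 0's temperature softens its logit, lowering its probability.
p_soft = class_temperature_softmax(logits, np.array([4.0, 1.0, 1.0]))
```

In a distillation loss, the softened distribution from the old model would serve as the target for the new model’s matching softened output.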
Read the paper at https://lnkd.in/eTy8KAC5
Code is also available at https://lnkd.in/eMzNs2Be