🌟 Exciting News! Our latest research work, entitled “RASEC: Rescaling Acquisition Strategy With Energy Constraints Under Fusion Kernel for Active Incision Recommendation in Tracheotomy”, has been accepted by IEEE Transactions on Automation Science and Engineering (T-ASE).

๐Ÿ” In this paper, we unveil an innovative autonomous palpation-based acquisition strategy – RASEC, designed for the tracheal region. RASEC predicts the next acquisition point interactively, maximizing expected information and minimizing palpation procedure costs. By leveraging a Gaussian Process (GP) to model tissue hardness distribution and anatomical information as a guiding input for medical robots, RASEC revolutionizes robot-assisted subtasks in tracheotomy.

💡 We introduce a dynamic tactile sensor based on resonant frequency to measure tissue hardness at millimeter-scale precision, ensuring safe interactions. By exploring kernel fusion techniques blending Squared Exponential (SE) and Ornstein-Uhlenbeck (OU) kernels, and optimizing the Bayesian search with laryngeal anatomical data, we enhance exploration efficiency and accuracy.
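To make the kernel-fusion idea concrete, here is a minimal sketch of a GP over tissue hardness with a convex SE+OU combination, picking the next palpation point by posterior uncertainty. The 1-D positions, hyperparameters (`w`, `ell`), and pure-uncertainty acquisition rule are illustrative assumptions, not the paper's RASEC acquisition function:

```python
import numpy as np

def se_kernel(x1, x2, ell=5.0):
    # Squared Exponential: smooth correlations over the tissue surface
    d = np.subtract.outer(x1, x2)
    return np.exp(-0.5 * (d / ell) ** 2)

def ou_kernel(x1, x2, ell=5.0):
    # Ornstein-Uhlenbeck: rougher, tolerates abrupt hardness changes
    d = np.abs(np.subtract.outer(x1, x2))
    return np.exp(-d / ell)

def fused_kernel(x1, x2, w=0.5, ell=5.0):
    # A convex combination of valid kernels is itself a valid kernel
    return w * se_kernel(x1, x2, ell) + (1.0 - w) * ou_kernel(x1, x2, ell)

def gp_posterior(x_train, y_train, x_test, noise=1e-3):
    # Standard GP regression: posterior mean and per-point variance
    K = fused_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = fused_kernel(x_test, x_train)
    Kss = fused_kernel(x_test, x_test)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Hardness measurements at three palpated positions (arbitrary units)
x_train = np.array([0.0, 10.0, 20.0])
y_train = np.array([0.2, 0.8, 0.3])
x_test = np.linspace(0.0, 20.0, 41)

mean, var = gp_posterior(x_train, y_train, x_test)
# Acquire next where the model is most uncertain
next_point = x_test[np.argmax(var)]
```

Here the next point maximizes posterior variance alone; RASEC additionally rescales the acquisition by energy costs such as sensor movement and base rotation.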

🔬 Our research considers new factors like tactile sensor movement and robotic base rotation in the acquisition strategy. Simulation and physical phantom experiments demonstrate a remarkable 53.1% reduction in sensor movement and 75.2% reduction in base rotation, with superior algorithmic performance metrics (average precision 0.932, average recall 0.973, average F1 score 0.952) and minimal distance errors (0.423 mm) at a high resolution of 1 mm.

🚀 The results showcase RASEC’s excellence in exploration efficiency, cost-effectiveness, and incision localization accuracy in real robot-assisted tracheotomy procedures.

This collaborative work is achieved by Wenchao Yue, Fan Bai, Jianbang Liu, and Prof. Hongliang Ren from The Chinese University of Hong Kong, Prof. Feng Ju from Nanjing University of Aeronautics and Astronautics, Prof. Max Q.-H. Meng from Southern University of Science and Technology, and Dr. Chwee Ming Lim from Singapore General Hospital.

Paper is available at https://lnkd.in/gEgmaDVj

🎉 Thrilled to unveil our latest breakthrough! 🌟 Our paper, “Dual-Stroke Soft Peltier Pouch Motor Based on Pipeless Thermo-Pneumatic Actuation”, a collaboration between Wenchao Yue and Chengxi Bai, has been published in Advanced Engineering Materials (cover invitation)!

💡 Soft pneumatic actuators are at the heart of soft robotics, offering reliability, safety, and flexibility. However, conventional bulky air compressors and pipes have limited their integration and lightweight design. Enter the Peltier pouch motor (PPM), a cutting-edge soft thermoelectric-based actuator that redefines possibilities in the field.

๐Ÿ” The PPM introduces modular and dual-stroke capabilities through active phase transition of a low-boiling-point liquid, enabling pipeless thermo-pneumatic actuation. Its lightweight and stretchable design fosters hyper-modularity, paving the way for diverse degrees-of-freedom hybrid systems.

🚀 From thermo-responsive land locomotion to submersible noise-free hovering and beyond, the PPM excels in various applications, including smart curtain control, body-temperature-driven wrist rehabilitation, and adaptive hybrid gripping. Our results showcase exceptional performance metrics, highlighting high load rates (around 400%), remarkable heat transfer efficiency (heating boost 425%, cooling boost 138%), and rapid thermal response (heating 0.57° s⁻¹, cooling 0.29° s⁻¹ at 4.5 V).

Paper link: https://lnkd.in/dPbyXHQd

Co-authors: Wenchao Yue, Chengxi Bai, Prof Sam, Jiewen Lai, and Prof. Hongliang Ren.

🔥 Join us on this groundbreaking journey as we push the boundaries of soft robotics with the innovative Peltier pouch motor! 🤖✨


We are thrilled to share our work “Lightweight Pneumatically Elastic Backbone Structure with Modular Construction and Nonlinear Interaction for Soft Actuators”, published in Soft Robotics 🎉🎉

The paper is available at https://lnkd.in/gfS5fGSw

The research is the result of a remarkable collaboration between Yang Yang from CUHK and ZJU, Sam, Jiewen Lai from CUHK, Chaochao Xu from NUS, Zhiguo He and Pengcheng Jiao from ZJU, and Hongliang Ren from CUHK and NUS.

👇 Open the article below for more details!!!

There has been a growing need for soft robots to perform various force-sensitive tasks, owing to their environmental adaptability, satisfactory controllability, and nonlinear mobility unmatched by rigid robots. It is therefore desirable to further study the system instability and strongly nonlinear interaction phenomena that are the main factors influencing the actuation of lightweight soft actuators.

Here, we present a design principle for a lightweight pneumatically elastic backbone structure (PEBS) with modular construction for soft actuators, comprising a backbone printed as one piece and a common strip balloon. We build a prototype of a lightweight (<80 g) soft actuator that can perform bending motions with satisfactory output forces (~20 times its self-weight).

Experiments are conducted on the bending effects generated by interactions between the hyper-elastic inner balloon and the elastic backbone. We investigate the nonlinear interaction and system instability experimentally, numerically, and parametrically. To overcome them, we further derive a theoretical nonlinear model and a numerical model. Satisfactory agreement is obtained between the numerical, theoretical, and experimental results, fully validating the accuracy of the numerical model. Parametric studies are conducted on the backbone geometry and stiffness and on the balloon stiffness, thickness, and diameter. The accurate controllability, operation safety, modularization ability, and collaborative ability of the PEBS are validated by designing the PEBS into a soft laryngoscope, a modularized PEBS library for a robotic arm, and a PEBS system that can perform remote surgery. The reported work further broadens the applicability of soft robotics.

FIG. 1. Illustrative demonstration of the PEBS: (a) the detailed structure and dimensions of the PEBS, (b) the design principle that can be divided into the separation stage and the interaction stage, and (c) the design principle that can be specifically divided into the separation stage, the insufficient interaction stage, the full interaction stage, and the excessive interaction stage, based on the interaction conditions.

FIG. 2. Nonlinear interaction phenomenon analyses: (a) the free oscillation phenomenon of the backbone structure generated by the structural asymmetric stress responses to gravity, (b) the interaction performances of the backbone structure and the density plot showing the relationship between the gap numbers, pressures, and bending angles, (c) the interaction performances of the balloon and the relationships between the pressures and expansion ratios regarding the radial and axial expansion ratios, respectively, (d) relationships between the pressures and stresses regarding the backbone structure and balloon, respectively.

FIG. 3. Applications of PEBS demonstrate unique advantages of accurate controllability, operation safety, modularization ability, and collaborative ability. (a) A PEBS soft laryngoscope that can operate laryngeal diagnosis. (b) Real-time images captured by the integrated image sensor. (c) Modularized PEBS library that can be installed onto a robot arm. (d) A PEBS grasper that can operate various grasping tasks. (e) The PEBS system can be potentially applied to operate a debridement.

Our paper, “Curriculum-Based Augmented Fourier Domain Adaptation for Robust Medical Image Segmentation”, has just been published in IEEE Transactions on Automation Science and Engineering!!!

Our work proposes the Curriculum-based Augmented Fourier Domain Adaptation (Curri-AFDA) and demonstrates superior adaptation, generalization, and robustness for medical image segmentation.

**Motivation**

Medical image segmentation is key to improving computer-assisted diagnosis and intervention autonomy. However, due to domain gaps between different medical sites, deep learning-based segmentation models frequently encounter performance degradation when deployed in a novel domain. Moreover, model robustness is also highly expected to mitigate the effects of data corruption.

**Methodology**

Considering all these demanding yet practical needs to automate medical applications and benefit healthcare, we propose the Curriculum-based Augmented Fourier Domain Adaptation (Curri-AFDA) for medical image segmentation. Specifically, we design a novel curriculum strategy that progressively transfers amplitude information in the Fourier space from the target domain to the source domain to mitigate domain gaps, and we incorporate chained augmentation mixing to further improve generalization and robustness.
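The core amplitude-transfer step can be sketched for a single-channel image as below. The window parameter `beta` and its handling are illustrative stand-ins; the paper's curriculum schedule and chained augmentation mixing are omitted:

```python
import numpy as np

def fourier_amplitude_transfer(src_img, tgt_img, beta=0.1):
    """Swap the low-frequency amplitude of the source image with the
    target's, keeping the source phase (which carries the anatomy).
    `beta` sets the swapped low-frequency window size; in a curriculum
    it would grow progressively over training."""
    fft_src = np.fft.fft2(src_img)
    fft_tgt = np.fft.fft2(tgt_img)
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    # Centre the spectra so low frequencies sit in the middle
    amp_src = np.fft.fftshift(amp_src)
    amp_tgt = np.fft.fftshift(amp_tgt)
    h, w = src_img.shape
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b, cw - b:cw + b] = amp_tgt[ch - b:ch + b, cw - b:cw + b]
    amp_src = np.fft.ifftshift(amp_src)

    # Recombine the swapped amplitude with the original phase
    out = np.fft.ifft2(amp_src * np.exp(1j * pha_src))
    return np.real(out)
```

Growing `beta` over training yields the curriculum: early epochs see mildly target-stylized images, later epochs stronger transfers.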

**Performance**

Extensive experiments on two segmentation tasks with cross-domain datasets show the consistent superiority of our method in adaptation and generalization on multiple testing domains, and in robustness against synthetically corrupted data. Moreover, our approach is independent of image modality, as its efficacy does not rely on modality-specific characteristics. The ablation study also demonstrates the benefit of our method for image classification in addition to segmentation. Our method can therefore potentially be applied in various medical applications and yield improved performance.

Paper is available at https://lnkd.in/guYe5SBA

Code is released at https://lnkd.in/gfFAfSQX

Thanks to all collaborators, including An Wang, Mengya Xu, and Prof. Hongliang Ren from CUHK, and Dr. Mobarakol Islam from WEISS, UCL.


🎉 Our recent work “Surgical-VQLA++: Adversarial Contrastive Learning for Calibrated Robust Visual Question-Localized Answering in Robotic Surgery” has been accepted by Information Fusion!

This paper is an extended version of our #ICRA2023 Surgical-VQLA. Our method can serve as an effective and reliable tool to assist in surgical education and clinical decision-making by providing more insightful analyses of surgical scenes.

✨ Key Contributions in the journal version:

– A dual calibration module is proposed to align and normalize multimodal representations. 

– A contrastive training strategy with adversarial examples is employed to enhance robustness.

– Various optimization functions are extensively explored.

– The EndoVis-18-VQLA & EndoVis-17-VQLA datasets are further extended.

– Our proposed solution presents superior performance and robustness against real-world image corruption.

Conference Version (ICRA 2023): https://lnkd.in/gHscT3eN

Journal Version (Information Fusion): https://lnkd.in/gQNWwHmt

Code & Dataset: https://lnkd.in/g7CTuyAH

Thanks to all of the collaborators for their efforts: Long Bai, Guankun Wang, An Wang, and Prof. Hongliang Ren from CUHK, Dr. Mobarakol Islam from WEISS, UCL, and Dr. Lalithkumar Seenivasan from JHU.


🎉 Check out our #MICCAI2024 accepted paper “EndoUIC: Promptable Diffusion Transformer for Unified Illumination Correction in Capsule Endoscopy”.

In this work, by incorporating a set of learnable parameters to prompt the learning targets, the diffusion model can effectively address the unified illumination correction challenge in capsule endoscopy. We also propose a new capsule endoscopy dataset containing both underexposed and overexposed images, along with the ground truth.
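To illustrate the general mechanism of learnable prompts (a generic sketch, not the paper's architecture), the snippet below prepends prompt tokens to patch tokens so that self-attention can condition every patch on them; all shapes and values are illustrative, and in a real model the prompts would be trained by backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Patch tokens from an endoscopy image (N patches, d dims) - toy sizes
n_patches, d = 16, 32
patch_tokens = rng.normal(size=(n_patches, d))

# Learnable prompt tokens steering the model toward a target
# (e.g. under- vs over-exposure correction); random init here
n_prompts = 4
prompt_tokens = rng.normal(size=(n_prompts, d))

# Prepend prompts so attention mixes them into every patch feature
tokens = np.concatenate([prompt_tokens, patch_tokens], axis=0)

# Single-head self-attention over the combined sequence
scores = tokens @ tokens.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
attended = weights @ tokens  # shape (n_prompts + n_patches, d)
```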

Thanks to all of our collaborators from multiple institutions: Long Bai, Qiaozhi Tan, Zhicheng He, Sishen Yuan, and Prof. Hongliang Ren from CUHK & SZRI, Tong Chen from USYD, Wan Jun Nah from Universiti Malaya, Yanheng Li from CityU HK, Prof. Zhen Chen, Prof. Jinlin Wu, and Prof. Hongbin Liu from CAIR HK, Dr. Mobarakol Islam from WEISS, UCL, and Dr. Zhen Li from Qilu Hospital of SDU.

Paper: https://lnkd.in/gJaqikqj

Code & Dataset: https://lnkd.in/ghYauAGM


📊 **Empowering Robotic Surgery with SAM 2: An Empirical Study on Surgical Instrument Segmentation** 🤖👨‍⚕️

We’re excited to share our latest research on the Segment Anything Model (SAM) 2. This empirical evaluation uncovers SAM 2’s robustness and generalization capabilities in surgical image/video segmentation, a critical component for enhancing precision and safety in the operating room.

🔬 **Key Findings**:

– In general, SAM 2 outperforms its predecessor in instrument segmentation, showing much-improved zero-shot generalization to the surgical domain.

– Utilizing bounding box prompts, SAM 2 achieves remarkable results, setting a new benchmark in surgical image segmentation.

– With a single point as the prompt on the first frame, SAM 2 demonstrates substantial improvements in video segmentation over SAM, which requires point prompts on every frame. This suggests great potential for addressing video-based surgical tasks.

– Resilience Under Common Corruptions: SAM 2 shows impressive robustness against real-world image corruption, maintaining performance under various challenges such as compression, noise, and blur.
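Empirical segmentation studies of this kind are typically scored with Dice and IoU; as a minimal sketch of those standard metrics (our assumption about the measures, not the paper's exact evaluation pipeline):

```python
import numpy as np

def dice_score(pred, gt):
    # Dice = 2|A ∩ B| / (|A| + |B|); 1.0 for a perfect mask
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def iou_score(pred, gt):
    # IoU = |A ∩ B| / |A ∪ B|
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0
```

Robustness under corruption is then measured by re-scoring the same masks on compressed, noisy, or blurred inputs and tracking the drop in these metrics.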

🔧 **Practical Implications**:

– With faster inference speeds, SAM 2 is poised to provide quick, accurate segmentation, making it a valuable asset in the clinical setting.

🔗 **Learn More**:

For those interested in the technical depth, our paper is available on [arXiv](https://lnkd.in/gHfdrvj3).

We’re eager to engage with the community and explore how SAM 2 can revolutionize surgical applications.

Thanks to the team contributions of Jieming Yu, An Wang, Wenzhen Dong, Mengya Xu, Jie Wang, Long Bai, and Hongliang Ren from the Department of Electronic Engineering, The Chinese University of Hong Kong and the Shenzhen Research Institute of CUHK, and Mobarakol Islam from WEISS – Wellcome / EPSRC Centre for Interventional and Surgical Sciences, UCL.
