**Exploring Origami Crawlers: Unleashing the Potential of Confined Space Robotics**

We are thrilled to share a new research paper just published in *Communications Engineering*, showcasing a remarkable advancement in the field of tiny origami-inspired robots. 📰✨

**🔗 [Untethered Bistable Origami Crawler for Confined Applications](https://lnkd.in/gpJHEM-q)**

### What’s the Buzz About?

This research introduces a **magnetically actuated bistable origami crawler**, a miniature robot designed to navigate and perform tasks in confined spaces that are challenging for traditional tethered or wired devices. 🚀

### Key Highlights:

– **Shape-Morphing Capability**: The crawler can transform between an undeployed locomotion state and a deployed load-bearing state, thanks to its bistable design (see the energy sketch after this list). 🔧

– **Robust Locomotion**: Uses out-of-plane crawling for bi-directional locomotion, navigating robustly even in high-friction environments. 🛤️

– **Load-Bearing Applications**: The deployed state allows the crawler to execute tasks like microneedle insertion, opening up possibilities for medical interventions. 🩺

– **Untethered Operation**: Equipped with internal permanent magnets, this crawler operates without external tethers, enhancing its maneuverability and its potential for miniaturization.
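
For readers curious about what "bistable" buys you: the structure has two local energy minima separated by a barrier, so each state holds its shape with no continuous power input, and actuation is only needed to snap across the barrier. Below is a minimal Python sketch of that double-well intuition; the quartic energy and its coefficients are illustrative assumptions, not the mechanics model from the paper.

```python
import numpy as np

# Illustrative double-well energy E(x) = a*x^4 - b*x^2 over an abstract
# folding coordinate x: the two minima stand in for the crawler's
# undeployed and deployed states. The coefficients a, b are made up.
a, b = 1.0, 2.0
x = np.linspace(-1.8, 1.8, 361)
E = a * x**4 - b * x**2

stable = np.sqrt(b / (2 * a))              # minima at x = +/- sqrt(b / 2a)
barrier = E[np.abs(x).argmin()] - E.min()  # energy needed to snap states
print(f"stable states at x = +/-{stable:.2f}, snap barrier = {barrier:.2f}")
```

Crossing that barrier once (e.g., with a magnetic field pulse) toggles the state, while holding either state costs nothing, which is what makes an untethered, load-bearing deployed configuration practical.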

### Why It Matters:

This technology could provide an alternative approach to solving problems encountered in confined environments, from medical procedures in the gastrointestinal tract to complex engineering tasks in tight spots. 🌍

### What’s Next:

The concept proposed in this work could be adapted to a variety of other deployable, load-bearing applications, such as fluid collection, stents, and airway support mechanisms. The proposed mechanism design could also be integrated with other actuation methods, such as pneumatic systems, for larger-scale applications. 🌠

### Join the Conversation:

This work is a collaborative effort among Dr. Catherine Cai from the National University of Singapore, Dr. Hui Huang from A*STAR – Agency for Science, Technology and Research, and Prof. Hongliang Ren from The Chinese University of Hong Kong.

We’re eager to hear your thoughts on this research! How do you envision this technology being used in your field? Share your ideas and let’s discuss the future of robotics and origami engineering! 🤖📚

*Don’t forget to check out the full paper for a deep dive into the mechanics, applications, and implications of this incredible new technology. It’s a must-read for anyone interested in the cutting edge of robotics and engineering innovation!* 📖💡


Excited to share our journal paper entitled “Magnetic Tracking With Real-Time Geomagnetic Vector Separation for Robotic Dockable Charging” published in IEEE Transactions on Intelligent Transportation Systems! 🎉

Great collaboration between the Chinese University of Hong Kong and the Quanzhou Institute of Equipment Manufacturing, Haixi Institutes, Chinese Academy of Sciences. 🤝

The superposition of the geomagnetic vector and the magnetic field vector generated by the permanent magnet (PM) degrades magnetic tracking performance. Here we present a real-time geomagnetic-vector-separation method that estimates the PM pose and the geomagnetic vector simultaneously. This advancement offers a robust solution for seamless, reliable self-charging in autonomous robotic operations, with far-reaching implications for various industries.
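
To see why joint estimation is possible at all: each three-axis magnetometer in an array measures the PM's dipole field, which varies strongly from sensor to sensor, plus the geomagnetic vector, which is essentially uniform across a small array, so a nonlinear least-squares fit can recover both at once. The sketch below is a minimal Python illustration of that joint-fit idea, not the authors' real-time algorithm; the sensor layout, dipole parameterization (the magnet's orientation is folded into its moment vector), and initial guess are all assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 3x3 array of three-axis magnetometers at known positions (m).
SENSORS = np.array([[x, y, 0.0] for x in (-0.1, 0.0, 0.1)
                                for y in (-0.1, 0.0, 0.1)])

def dipole_field(p, m):
    """Point-dipole field of the permanent magnet (position p, moment m),
    evaluated at every sensor; returns an (N, 3) array in tesla."""
    mu0 = 4e-7 * np.pi
    r = SENSORS - p                              # sensor positions relative to PM
    d = np.linalg.norm(r, axis=1, keepdims=True)
    return mu0 / (4 * np.pi) * (3 * r * (r @ m)[:, None] / d**5 - m / d**3)

def residuals(theta, B_meas):
    """theta packs the 9 unknowns: PM position, PM moment, and the
    geomagnetic vector g shared by all sensors."""
    p, m, g = theta[:3], theta[3:6], theta[6:9]
    return (dipole_field(p, m) + g - B_meas).ravel()

def track(B_meas, theta0):
    """One tracking update: jointly fit PM pose and geomagnetic vector."""
    return least_squares(residuals, theta0, args=(B_meas,)).x

# Example initial guess: PM 5 cm above the array, ~0.1 A*m^2 moment, and a
# rough mid-latitude geomagnetic field of a few tens of microtesla.
theta0 = np.array([0, 0, 0.05, 0, 0, 0.1, 20e-6, 0, 40e-6])
```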

Paper: https://lnkd.in/gAbr82Dp

Authors: Shijian Su, Houde Dai, Yuanchao Zhang, Sishen Yuan, Prof. Shuang Song, and Prof. Hongliang Ren.


🎉 Our recent work “Surgical-VQLA++: Adversarial Contrastive Learning for Calibrated Robust Visual Question-Localized Answering in Robotic Surgery” has been accepted by Information Fusion!

This paper is an extended version of our #ICRA2023 Surgical-VQLA. Our method can serve as an effective and reliable tool to assist in surgical education and clinical decision-making by providing more insightful analyses of surgical scenes.

✨ Key Contributions in the journal version:

– A dual calibration module is proposed to align and normalize multimodal representations. 

– A contrastive training strategy with adversarial examples is employed to enhance robustness (a toy sketch of this idea follows the list).

– A variety of optimization functions are explored in depth.

– The EndoVis-18-VQLA & EndoVis-17-VQLA datasets are further extended.

– Our proposed solution presents superior performance and robustness against real-world image corruption.
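
For readers outside the area, here is a toy PyTorch sketch of the generic adversarial-contrastive recipe the second contribution builds on: craft a worst-case perturbed view by ascending an InfoNCE loss, then train the encoder to pull the clean and perturbed embeddings back together. It is a hypothetical illustration, not Surgical-VQLA++'s actual multimodal architecture, loss, or hyperparameters; `info_nce`, `adversarial_contrastive_step`, and `eps` are invented names and values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """InfoNCE: matching (clean, perturbed) pairs are positives;
    every other sample in the batch serves as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                      # (B, B) similarities
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

def adversarial_contrastive_step(encoder, x, eps=0.03):
    """FGSM-style adversarial view that maximizes the contrastive loss,
    followed by a training loss that re-aligns clean and adversarial views."""
    x_adv = x.clone().detach().requires_grad_(True)
    attack_loss = info_nce(encoder(x), encoder(x_adv))
    grad = torch.autograd.grad(attack_loss, x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).detach()    # worst-case perturbation
    return info_nce(encoder(x), encoder(x_adv))     # robust training loss

# Toy usage with a stand-in encoder:
enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
loss = adversarial_contrastive_step(enc, torch.randn(8, 3, 32, 32))
loss.backward()  # then step an optimizer as usual
```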

Conference Version (ICRA 2023): https://lnkd.in/gHscT3eN

Journal Version (Information Fusion): https://lnkd.in/gQNWwHmt

Code & Dataset: https://lnkd.in/g7CTuyAH

Thanks to all of the collaborators for their efforts: Long Bai, Guankun Wang, An Wang, and Prof. Hongliang Ren from CUHK; Dr. Mobarakol Islam from WEISS, UCL; and Dr. Lalithkumar Seenivasan from JHU.


📢 **Call for Papers: ICBIR 2025** | August 16–18, 2025 | Zhangye, Gansu, China (submissions due July 20)

Join us in the breathtaking landscapes of **Zhangye, China** for the **2025 International Conference on Biomimetic Intelligence and Robotics (ICBIR)**, an affiliated event of the **Q1** Elsevier journal **Biomimetic Intelligence and Robotics** (IF 5.4).

We welcome original contributions covering:
• Biomimetic design, materials & actuation
• Bio-inspired sensing, perception & navigation
• Learning-based control & embodied AI
• Soft & adaptive robotics
• Novel real-world applications integrating theory and practice

All accepted papers will be published by Elsevier and indexed in EI & Scopus. Top-ranked submissions will earn best-paper awards and invitations to submit expanded versions to Biomimetic Intelligence and Robotics and other leading journals.

๐Š๐ž๐ฒ ๐ƒ๐š๐ญ๐ž๐ฌ 

• Full-Paper (or Short Abstract) submissions due → July 20, 2025

• Acceptance notifications → August 1, 2025

• Registration & final manuscript → August 10, 2025

Learn more & submit at ▶️ http://www.icbir.org

4 papers accepted at MICCAI2024

Congratulations to the following members (Long, Beilei, Yiming) on their papers accepted for presentation at MICCAI2024 (858 of 2771 submissions accepted this year, an acceptance rate of 31%):

  1. Endo-4DGS: Endoscopic Monocular Scene Reconstruction with 4D Gaussian Splatting
  2. EndoUIC: Promptable Diffusion Transformer for Unified Illumination Correction in Capsule Endoscopy
  3. LighTDiff: Surgical Endoscopic Image Low-Light Enhancement with T-Diffusion
  4. EndoDAC: Efficient Adapting Foundation Model for Self-Supervised Depth Estimation from Any Endoscopic Camera

Team presentation and award at IPCAI2024

Congratulations to Beilei and Mobarakol for the paper “Surgical-DINO: Adapter Learning of Foundation Models for Depth Estimation in Endoscopic Surgery” (Beilei Cui, Mobarakol Islam, Long Bai, Hongliang Ren), shortlisted for the IPCAI2024 Best Paper Award and selected for a long presentation at IPCAI2024 in Barcelona, Spain.

The International Conference on Information Processing in Computer-Assisted Interventions (IPCAI) is one of the most important venues for disseminating innovative peer-reviewed research in computer-assisted surgery and minimally invasive interventions. Now in its 15th year, IPCAI is an interdisciplinary conference that attracts clinicians, engineers and computer science researchers from various backgrounds, including machine learning, robotics, computer vision, medical imaging, data science, and sensing technologies. IPCAI fosters connections and showcases high-quality research in a unique and focused two-day event. The conference is formatted specifically to actively engage the attendees.

IPCAI 2024 was held on June 18–19, 2024, in conjunction with the Computer-Assisted Radiology and Surgery (CARS) conference in Barcelona, Spain.


Team presentations and award at ICRA2024

Congratulations to the following members on their papers accepted and presented at ICRA2024 in Japan:

  1. Chained Flexible Capsule Endoscope: Unraveling the Conundrum of Size Limitations and Functional Integration for Gastrointestinal Transitivity
  2. Magnetic-Guided Flexible Origami Robot Toward Long-Term Phototherapy of H. Pylori in the Stomach
  3. Inconstant Curvature Kinematics of Parallel Continuum Robot without Static Model
  4. OSSAR: Towards Open-Set Surgical Activity Recognition in Robot-Assisted Surgery
Congratulations as well on the excellent work and the poster award at the ICRA2024 workshop on Surgical Robotics:

  • AI-Enhanced Robotic Microendoscope for Optical Coherence Tomography Imaging


Team presentations at IJCAI 2023

Congratulations to the following members for their presentations at IJCAI2023 in Macau, at the IJCAI2023 Symposium on Multimodal Reasoning – Techniques, Applications, and Challenges:

  1. Multimodal Reasoning and Language Prompting for Interactive Learning Assistant for Robotic Surgery. Gokul Kannan, Lalithkumar Seenivasan, Hongliang Ren

Dr. Ren also gave a talk at the IJCAI-2023 Symposium Session on Medical Large Models: “Surgical motion understanding and generation towards intelligent minimally invasive robotic procedures”.

6 papers presented at MICCAI2023


Congratulations to the following members on their papers accepted and presented at MICCAI2023:

  1. Revisiting distillation for continual learning on visual question localized-answering in robotic surgery. L Bai, M Islam, H Ren*. Medical Image Computing and Computer Assisted Intervention (MICCAI) 2023.
  2. Rectifying noisy labels with sequential prior: multi-scale temporal feature affinity learning for robust video segmentation. B Cui, M Zhang, M Xu, A Wang, W Yuan*, H Ren*. MICCAI 2023.
  3. Co-attention gated vision-language embedding for visual question localized-answering in robotic surgery. L Bai, M Islam, H Ren*. MICCAI 2023.
  4. LLCaps: learning to illuminate low-light capsule endoscopy with curved wavelet attention and reverse diffusion. L Bai, T Chen, Y Wu, A Wang, M Islam, H Ren*. MICCAI 2023.
  5. S2ME: Spatial-Spectral Mutual Teaching and Ensemble Learning for Scribble-supervised Polyp Segmentation. A Wang, M Xu, Y Zhang, M Islam, H Ren*. MICCAI 2023 (arXiv:2306.00451).
  6. SurgicalGPT: End-to-End Language-Vision GPT for visual question answering in surgery. L Seenivasan, M Islam, G Kannan, H Ren*. MICCAI 2023 (arXiv:2304.09974).

Also, congratulations to Lalith on the Best MICCAI Reviewer award and to Andy on his workshop presentation.