Team presentations and award at ICRA2024

Congratulations to the following members for their papers accepted by ICRA2024 in Japan and their presentations at ICRA2024:

  1. Chained Flexible Capsule Endoscope: Unraveling the Conundrum of Size Limitations and Functional Integration for Gastrointestinal Transitivity
  2. Magnetic-Guided Flexible Origami Robot Toward Long-Term Phototherapy of H. Pylori in the Stomach
  3. Inconstant Curvature Kinematics of Parallel Continuum Robot without Static Model
  4. OSSAR: Towards Open-Set Surgical Activity Recognition in Robot-Assisted Surgery
 
Congratulations on the excellent work and the poster award at the ICRA2024 workshop on surgical robotics:
 
  •  AI-Enhanced Robotic Microendoscope for Optical Coherence Tomography Imaging

Team presentations at IJCAI 2023

Congratulations to the following members for their presentations at IJCAI2023 in Macau, at the IJCAI2023 Symposium on Multimodal Reasoning – Techniques, Applications, and Challenges:

  • Multimodal Reasoning and Language Prompting for Interactive Learning Assistant for Robotic Surgery. Gokul Kannan, Lalithkumar Seenivasan, Hongliang Ren
  • IJCAI-2023 Symposium Session on Medical Large Models: talk by Dr. Ren, “Surgical motion understanding and generation towards intelligent minimally invasive robotic procedures”

6 papers presented at MICCAI2023

Congratulations to the following members for the papers accepted by and presented at MICCAI2023:

  • Revisiting distillation for continual learning on visual question localized-answering in robotic surgery. L Bai, M Islam, H Ren*. MICCAI 2023
  • Rectifying noisy labels with sequential prior: multi-scale temporal feature affinity learning for robust video segmentation. B Cui, M Zhang, M Xu, A Wang, W Yuan*, H Ren*. MICCAI 2023
  • Co-attention gated vision-language embedding for visual question localized-answering in robotic surgery. L Bai, M Islam, H Ren*. MICCAI 2023
  • LLCaps: learning to illuminate low-light capsule endoscopy with curved wavelet attention and reverse diffusion. L Bai, T Chen, Y Wu, A Wang, M Islam, H Ren*. MICCAI 2023
  • S2ME: Spatial-Spectral Mutual Teaching and Ensemble Learning for Scribble-supervised Polyp Segmentation. A Wang, M Xu, Y Zhang, M Islam, H Ren*. MICCAI 2023 (arXiv:2306.00451)
  • SurgicalGPT: End-to-End Language-Vision GPT for visual question answering in surgery. L Seenivasan, M Islam, G Kannan, H Ren*. MICCAI 2023 (arXiv:2304.09974)

Congratulations also to Lalith on the MICCAI Best Reviewer award and to Andy on his workshop presentation.

Team presentations and award at ICRA2023

Congratulations to the following members for the papers accepted by and presented at ICRA2023:

  • Surgical-VQLA: transformer with gated vision-language embedding for visual question localized-answering in robotic surgery. L Bai, M Islam, L Seenivasan, H Ren*. ICRA 2023 (arXiv:2305.11692)
  • Rethinking Feature Extraction: Gradient-Based Localized Feature Extraction for End-To-End Surgical Downstream Tasks. W Pang, M Islam, S Mitheran, L Seenivasan, M Xu, H Ren. IEEE Robotics and Automation Letters 7(4), 12623–12630

Congratulations on the excellent work and the poster award at the workshop “New Evolutions in Surgical Robotics: Embracing Multimodal Imaging Guidance, Intelligence, and Bio-inspired Mechanisms” held at ICRA 2023:
  • Domain Adaptive Sim-to-Real Segmentation of Oropharyngeal Organs Towards Robot-assisted Intubation, Guankun Wang, Tian-Ao Ren, Jiewen Lai, Long Bai, and Hongliang Ren
  • Relational Reasoning in VQA Models for Visual Question Answering in Robotic Surgery, Lalithkumar and Hongliang Ren

3 papers accepted by MICCAI2022

Congratulations to the following members for the papers accepted by MICCAI2022:

  • An et al Rethinking Surgical Instrument Segmentation: A Background Image Can Be All You Need
  • Lalith et al Surgical-VQA: Visual Question Answering in Surgical Scenes using Transformer
  • Mengya et al Rethinking Surgical Captioning: End-to-End Window-Based MLP Transformer Using Patches

 

Team presentations at ICRA2022

Congratulations to the following members for the 3 papers accepted by ICRA2022 and presented at ICRA2022:

Godwin: Chip-Less Real-Time Wireless Sensing of Endotracheal Intubation Tubes by Printing and Mounting Conformable Antenna Tag 

Lalith: Global-Reasoned Multi-Task Learning Model for Surgical Scene Understanding

Ling: SIRNet: Fine-Grained Surgical Interaction Recognition

Godwin and Lalith are presenting at ICRA on site this year:
https://www.icra2022.org/

Team presentations at pre-ICRA2021

Congratulations to the following members for the papers accepted by ICRA2021 and presentations at pre-ICRA2021:

Mengya Learning Domain Adaptation with Model Calibration for Surgical Report Generation in Robotic Surgery https://youtu.be/Eu-ryJ9OyTM
Godwin Chip-Less Wireless Sensing of Origami Structural Morphing under Various Mechanical Stimuli Using Home-Based Ink-Jet Printable Materials https://youtu.be/HXgW2S21OGI
Huxin Remote-Center-Of-Motion Recommendation Toward Brain Needle Intervention Using Deep Reinforcement Learning https://youtu.be/Py5Vw_hQryY
Bok Seng Origami-Inspired Snap-Through Bistability in Parallel and Curved Mechanisms through the Inflection of Degree Four Vertexes https://youtu.be/mCBPTVw5_kw
Changsheng A Miniature Manipulator with Variable Stiffness towards Minimally Invasive Transluminal Endoscopic Surgery https://youtu.be/b_zW9MHgG5g
Ruphan Multiphysics Simulation of Magnetically Actuated Robotic Origami Worms  https://youtu.be/UCkLuhoN0ME

7 papers accepted by ICRA2021 including 2 concurrently by RA-L

Congratulations to the following members for the papers accepted by ICRA2021 including 2 concurrently by RA-L:

Xiao et al Magnetically-Connected Modular Reconfigurable Mini-Robotic System with Bilateral Isokinematic Mapping and Fast On-Site Assembly towards Minimally Invasive Procedures https://youtu.be/V1N2OiN43vw (Talk)  https://youtu.be/PNneXuRXgZI (Demo)
Mengya Learning Domain Adaptation with Model Calibration for Surgical Report Generation in Robotic Surgery https://youtu.be/Eu-ryJ9OyTM
Godwin Chip-Less Wireless Sensing of Origami Structural Morphing under Various Mechanical Stimuli Using Home-Based Ink-Jet Printable Materials https://youtu.be/HXgW2S21OGI
Huxin Remote-Center-Of-Motion Recommendation Toward Brain Needle Intervention Using Deep Reinforcement Learning https://youtu.be/Py5Vw_hQryY
Bok Seng Origami-Inspired Snap-Through Bistability in Parallel and Curved Mechanisms through the Inflection of Degree Four Vertexes https://youtu.be/mCBPTVw5_kw
Changsheng A Miniature Manipulator with Variable Stiffness towards Minimally Invasive Transluminal Endoscopic Surgery https://youtu.be/b_zW9MHgG5g
Ruphan Multiphysics Simulation of Magnetically Actuated Robotic Origami Worms  https://youtu.be/UCkLuhoN0ME

ICBME 2019 Special Symposium in Surgical Robotics

Themes

  • Computer-Assisted Surgery
  • Flexible Robotics and Navigation in Surgery
  • Artificial Intelligence in Robotic Surgery

Date: Dec 11, 2019, EA

Speakers and Topics

1600 – 1615  Robotic Intervention Utilizing Bioengineering Based Therapeutic Methods
             Ichiro Sakuma, University of Tokyo
1615 – 1630  Augmented Reality for Orthopaedic Surgery
             Jaesung Hong, Daegu Gyeongbuk Institute of Science and Technology
1630 – 1645  Medical Robot Link Architecture Connected to Smart Cyber Operating Theater (SCOT)
             Ken Masamune, Tokyo Women’s Medical University
1645 – 1700  Transluminal Robotics with Delicate Continuum and Context Awareness
             Hongliang Ren, National University of Singapore
1700 – 1715  Robot-Assisted Interventions under Intra-Operative MRI-Based Guidance
             Ka-Wai Kwok, University of Hong Kong
1715 – 1730  Biomimetic Wrinkled MXene Pressure Sensors towards Collision-Aware Robots
             Catherine Cai, National University of Singapore


 

Ichiro Sakuma, University of Tokyo

  • Robotic Intervention Utilizing Bioengineering Based Therapeutic Methods
  • Biography
    Ichiro Sakuma received B.S., M.S., and Ph.D. degrees in precision machinery engineering from the University of Tokyo, Tokyo, Japan, in 1982, 1984, and 1989, respectively. He was a Research Associate (1987) and Associate Professor (1991) at the Department of Applied Electronic Engineering, Tokyo Denki University, Saitama, Japan, and a research instructor at the Department of Surgery, Baylor College of Medicine, Houston, Texas, from 1990 to 1991. He was an Associate Professor at the Department of Precision Engineering (1998), and then Associate Professor (1999) and Professor (2001) at the Graduate School of Frontier Sciences, the University of Tokyo. He is currently a Professor at the Department of Bioengineering and the Department of Precision Engineering, Director of the Medical Device Development and Regulation Research Center, and Vice Dean of the School of Engineering, the University of Tokyo. He is the immediate past president of the Japanese Society for Medical and Biological Engineering (JSMBE) (2014-2016), and is also Deputy Director for Medical Devices, Center for Product Evaluation, Pharmaceuticals and Medical Devices Agency (PMDA). His research interests include 1) computer-aided surgery; 2) medical robotics and medical devices for minimally invasive therapy; 3) analysis of cardiac arrhythmia phenomena and control of arrhythmia; and 4) regulatory science for medical device development. He has received various academic awards, including the Best Paper Award of the Japan Society of Computer Aided Surgery (2006) and the Best Paper Award of the Robotics Society of Japan (2010, 2015). In 2014, his group’s research was selected as one of the most exciting peer-reviewed optics research to have emerged over the past 12 months by Optics & Photonics News (OSA).

 

Ken Masamune, Tokyo Women’s Medical University

  • Medical Robot Link Architecture Connected to Smart Cyber Operating Theater (SCOT)
  • Abstract
    Nowadays, many medical devices and systems, including imaging machines, anesthesia equipment, navigation systems, biomonitoring devices, surgical beds, and medical robots, are installed in the operating room. However, these devices run in stand-alone mode without time synchronization, which makes it difficult to combine and analyze information from multiple devices to support the surgeon’s decisions during surgery. To improve this situation, we have been developing an integrated operating room named the “Smart Cyber Operating Theater” (SCOT), built on the ORiN middleware. In this presentation, we introduce the current SCOT project and the design concept of a new open platform architecture for the integration of master/slave robotic devices and information-guided robots, especially for oral and maxillofacial surgery. This design will accelerate the development of various robotic interfaces and end effectors with fast validation.
  • Biography
    Ken Masamune received the Ph.D. degree in precision machinery engineering from the University of Tokyo, Japan, in 1999. From 1995 to 1999, he was a Research Associate in the Department of Precision Machinery Engineering, the University of Tokyo. From 2000 to 2004, he was an Assistant Professor in the Department of Biotechnology, Tokyo Denki University, Tokyo. Since 2005, he has been an Associate Professor in the Department of Mechanoinformatics, Graduate School of Information Science and Technology, the University of Tokyo. His current research interests include computer-aided surgery, especially medical robotics and visualization devices and systems for surgery.

 

Jaesung Hong, Daegu Gyeongbuk Institute of Science and Technology

  • Augmented Reality for Surgical Navigation
  • Abstract
    Augmented reality (AR) has become a key technology for surgical navigation. Using AR, the shapes of invisible organs are overlaid on visible endoscopic or microscopic images, so the surgeon can avoid damaging healthy tissue and reduce the incision area. AR-based surgery generally uses an optical tracker and a camera. The optical tracker measures the positions and poses of multiple markers, and the relationship between the camera and the patient’s target organs can be measured in real time by tracking markers mounted on the camera body and on the patient. For the AR display, finding the relationship between the optical marker mounted on the camera body and the camera center (camera registration) is particularly important, as it strongly affects the overall accuracy of the AR display. This talk introduces the latest AR technologies applied to surgical navigation.
  • Biography
    Jaesung Hong is an associate professor and the Department Chair of Robotics Engineering at the Daegu Gyeongbuk Institute of Science and Technology (DGIST), South Korea. His research area is medical imaging and medical robotics for minimally invasive surgery.
    At the University of Tokyo, he developed the world’s first US-guided needle-insertion robot that tracks a movable, deformable organ. This work was reported in Physics in Medicine and Biology in 2004 and has been frequently cited (> 160 citations). While working at Kyushu University Hospital in Japan, he developed various customized surgical navigation systems that were clinically applied in approximately 120 surgeries, including percutaneous ablation therapies for liver tumors, cochlear implant surgeries, neurosurgeries for gliomas, and dental implant surgeries.
    After moving to DGIST, a research-oriented university supported by the Korean government, he developed a single-port surgery robot and its master device for high force transmission and a large workspace, as well as a portable, AR-based surgical navigation system that has been tested in tibia tumor resections and orthognathic surgeries in collaboration with major Korean hospitals, including Seoul National University Bundang Hospital and Samsung Seoul Hospital. He is one of a small number of specialists familiar with both engineering and clinical medicine.
    As of 2016, Prof. Hong had published approximately 42 journal papers, including 30 SCI/SCIE papers with impact factors; six of them appeared in top-10%-ranked journals. He has also filed or registered 15 domestic and 7 international patents, received 9 best paper/presentation awards, and obtained 10 research grants amounting to approximately 4.5M USD (including planned budgets).

 

Ka-Wai Kwok, University of Hong Kong

  • Robot-Assisted Interventions under Intra-Operative MRI-Based Guidance
  • Abstract
    Advanced surgical robotics has attracted significant research interest in supporting image guidance, including magnetic resonance imaging (MRI), for effective navigation of surgical instruments. In situ guidance of access routes to the target anatomy, rendered from imaging data, can provide distinct awareness of the robotic instrument tip’s position relative to the target anatomy in various types of minimally invasive interventions. Such MRI-guided robots therefore rely on real-time co-registration of the surgical plan with the imaging data captured during the intervention, as well as on computing the relative configuration between the instrument and the anatomy of surgical interest.
    This talk will present a compact robotic system capable of operating inside the bore of an MRI scanner, along with its solutions to the technical challenges of providing safe and effective catheter-based surgical manipulation. The proposed image processing system demonstrates clinical potential for enhanced surgical safety by imposing visual feedback on tele-operated robotic instruments even under large-scale, rapid tissue deformations in soft-tissue surgeries such as cardiac electrophysiology and stereotactic neurosurgery. The ultimate research objective is to enable the operator to perform safe, precise, and effective control of robotic instruments with the aid of pre- and intra-operative MRI models. The present work is timely in bridging the current technical gap between MRI and surgical robotic control.
  • Biography
    Dr. Ka-Wai Kwok is currently an assistant professor in the Department of Mechanical Engineering, The University of Hong Kong. He completed his PhD training at The Hamlyn Centre for Robotic Surgery, Imperial College London, in 2011, where he continued research on surgical robotics as a postdoctoral fellow. He then obtained the Croucher Foundation Fellowship 2013-14, which supported his research jointly hosted by The University of Georgia and Brigham and Women’s Hospital, Harvard Medical School. His research interests focus on surgical robotics, intra-operative medical image processing, and their use of high-performance computing techniques. To date, he has been involved in various designs of surgical robotic devices and interfaces for endoscopy, laparoscopy, and stereotactic and intra-cardiac catheter interventions. His work has been recognized by several awards at IEEE international conferences, including ICRA’14, IROS’13, and FCCM’11. He is also a recipient of the Early Career Award 2015/16 offered by the Research Grants Council (RGC) of Hong Kong.

 

Hongliang Ren, National University of Singapore, Singapore

  • Transluminal Robotics with Delicate Continuum and Context Awareness
  • Biography
    Dr. Hongliang Ren is currently an assistant professor leading a research group on medical mechatronics in the Biomedical Engineering Department of the National University of Singapore (NUS). He is an affiliated Principal Investigator of the Singapore Institute for Neurotechnology (SINAPSE) and the Advanced Robotics Center at NUS. Dr. Ren received his PhD in Electronic Engineering (specializing in Biomedical Engineering) from The Chinese University of Hong Kong (CUHK) in 2008. After graduation, he worked as a Research Fellow in the Laboratory for Computational Sensing and Robotics (LCSR) and the Engineering Research Center for Computer-Integrated Surgical Systems and Technology (ERC-CISST), Department of Biomedical Engineering and Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA, from 2008 to 2010. In 2010, he joined the Pediatric Cardiac Biorobotics Lab, Department of Cardiovascular Surgery, Children’s Hospital Boston & Harvard Medical School, USA, to investigate beating-heart robotic surgery systems. Prior to joining NUS, he worked in 2012 on a collaborative computer-integrated surgery project at the Surgical Innovation Institute of Children’s National Medical Center, USA. His main areas of interest include biomedical mechatronics, computer-integrated surgery, and dynamic positioning in medicine.

 

Catherine Cai, National University of Singapore, Singapore

  • Biomimetic Wrinkled MXene Pressure Sensors towards Collision-Aware Robots
  • Abstract
    The use of surgical robots in minimally invasive neurosurgical procedures can offer several benefits; however, the lack of force sensing limits their use in such procedures. Equipping surgical robots with pressure sensors can enhance robot-environment interaction by enabling collision awareness, and enhance human-robot interaction by providing surgeons the force feedback necessary for safe tissue manipulation. With the emergence of soft robotics in biomedical applications, the attached pressure sensors must be flexible and stretchable in order to comply with mechanically dynamic robotic movements and deformations. Inspired by the multi-dimensional wrinkles of a Shar-Pei dog’s skin, we have fabricated a flexible, stretchable piezoresistive pressure sensor consisting of MXene electrodes with biomimetic topographies. The sensor is more sensitive in the low-pressure regime (0.934 kPa⁻¹, <236 Pa) and less sensitive in the higher-pressure regime (0.188 kPa⁻¹, <2070 Pa).