Research Positions & PhD Scholarships on Robotics, AI, Perception

[RESEARCH AREA]
There are multiple openings for Ph.D. scholarships to perform research on Medical Robotics Perception & AI at The Chinese University of Hong Kong (CUHK). The main areas of interest include biorobotics and intelligent systems, medical mechatronics, continuum and soft flexible robots and sensors, multisensory perception, AI learning and control in image-guided procedures, deployable motion generation, compliance modulation/sensing, and cooperative, context-aware sensors/actuators in human environments. For more details, please refer to the recent publications on Google Scholar or the lab website http://labren.org/.
The prospective researchers/students will have the opportunity to work with an interdisciplinary team consisting of clinicians and researchers from robotics, AI & perception, imaging, and medicine.

[QUALIFICATIONS]
* Honours Bachelor's or Master's degree in Electronic and Computer Engineering (ECE), robotics, medical physics, automation, or mechatronics, or a related background
* Self-motivated and preferably with strong academic records (obtained a minimum CGPA of 3.5 out of 4.0 or equivalent)
* Outstanding academic experience (e.g., reputable research publications, national scholarships) or recognitions from a top university
* You might be eligible for various Ph.D. scholarship schemes depending on your qualifications, so please contact Prof. Ren with your background details
* If you have already obtained a Ph.D. admission offer from a top overseas university (e.g., a QS or THE top-50 university) for 2021-22, you might be eligible for the Vice-Chancellor’s Ph.D. Scholarship Scheme

[HOW TO APPLY]
Qualified candidates are invited to express their interest by email with detailed supporting documents (including CV, transcripts, objective, research interests, education background, experience, GPA, representative publications, and demo projects) and, if applicable, a copy of the offer letter from the top overseas university, to Prof. Hongliang Ren (hlren@ieee.org) as soon as possible.

Related websites:
Lab website: http://labren.org/
CUHK EE Ph.D. Program: http://www.ee.cuhk.edu.hk/en-gb/curriculum/mphil-phd-programme/admission
CUHK Vice-Chancellor’s Ph.D. Scholarship Scheme: https://www.gs.cuhk.edu.hk/admissions/scholarships-fees/scholarships#vcphd
CUHK Hong Kong PhD Fellowship Scheme (HKPFS): monthly stipend of HK$26,600 (approx. US$3,410)

Team presentations at pre-ICRA2021

Congratulations to the following members on their papers accepted by ICRA 2021 and presented at pre-ICRA 2021:

  • Menya: Learning Domain Adaptation with Model Calibration for Surgical Report Generation in Robotic Surgery (https://youtu.be/Eu-ryJ9OyTM)
  • Godwin: Chip-Less Wireless Sensing of Origami Structural Morphing under Various Mechanical Stimuli Using Home-Based Ink-Jet Printable Materials (https://youtu.be/HXgW2S21OGI)
  • Huxin: Remote-Center-Of-Motion Recommendation Toward Brain Needle Intervention Using Deep Reinforcement Learning (https://youtu.be/Py5Vw_hQryY)
  • Bok Seng: Origami-Inspired Snap-Through Bistability in Parallel and Curved Mechanisms through the Inflection of Degree Four Vertexes (https://youtu.be/mCBPTVw5_kw)
  • Changsheng: A Miniature Manipulator with Variable Stiffness towards Minimally Invasive Transluminal Endoscopic Surgery (https://youtu.be/b_zW9MHgG5g)
  • Ruphan: Multiphysics Simulation of Magnetically Actuated Robotic Origami Worms (https://youtu.be/UCkLuhoN0ME)

7 papers accepted by ICRA 2021, including 2 published concurrently in RA-L

Congratulations to the following members on their papers accepted by ICRA 2021, including 2 published concurrently in RA-L:

  • Xiao et al.: Magnetically-Connected Modular Reconfigurable Mini-Robotic System with Bilateral Isokinematic Mapping and Fast On-Site Assembly towards Minimally Invasive Procedures (Talk: https://youtu.be/V1N2OiN43vw; Demo: https://youtu.be/PNneXuRXgZI)
  • Menya: Learning Domain Adaptation with Model Calibration for Surgical Report Generation in Robotic Surgery (https://youtu.be/Eu-ryJ9OyTM)
  • Godwin: Chip-Less Wireless Sensing of Origami Structural Morphing under Various Mechanical Stimuli Using Home-Based Ink-Jet Printable Materials (https://youtu.be/HXgW2S21OGI)
  • Huxin: Remote-Center-Of-Motion Recommendation Toward Brain Needle Intervention Using Deep Reinforcement Learning (https://youtu.be/Py5Vw_hQryY)
  • Bok Seng: Origami-Inspired Snap-Through Bistability in Parallel and Curved Mechanisms through the Inflection of Degree Four Vertexes (https://youtu.be/mCBPTVw5_kw)
  • Changsheng: A Miniature Manipulator with Variable Stiffness towards Minimally Invasive Transluminal Endoscopic Surgery (https://youtu.be/b_zW9MHgG5g)
  • Ruphan: Multiphysics Simulation of Magnetically Actuated Robotic Origami Worms (https://youtu.be/UCkLuhoN0ME)

ICBME 2019 Special Symposium in Surgical Robotics

Themes

  • Computer-Assisted Surgery
  • Flexible Robotics and navigation in Surgery
  • Artificial Intelligence in Robotic Surgery

Date: Dec 11, 2019, EA

Speakers and Topics

1600 – 1615  Robotic Intervention Utilizing Bioengineering Based Therapeutic Methods
             Ichiro Sakuma, University of Tokyo
1615 – 1630  Augmented Reality for Orthopaedic Surgery
             Jaesung Hong, Daegu Gyeongbuk Institute of Science and Technology
1630 – 1645  Medical Robot Link Architecture Connected to Smart Cyber Operating Theater (SCOT)
             Ken Masamune, Tokyo Women’s Medical University
1645 – 1700  Transluminal Robotics with Delicate Continuum and Context Awareness
             Hongliang Ren, National University of Singapore
1700 – 1715  Robot-Assisted Interventions under Intra-Operative MRI-Based Guidance
             Ka-Wai Kwok, University of Hong Kong
1715 – 1730  Biomimetic Wrinkled MXene Pressure Sensors towards Collision-Aware Robots
             Catherine Cai, National University of Singapore


 

SAKUMA, Ichiro, University of Tokyo

  • Robotic Intervention Utilizing Bioengineering Based Therapeutic Methods
  • Biography
    Ichiro Sakuma received B.S., M.S., and Ph.D. degrees in precision machinery engineering from the University of Tokyo, Tokyo, Japan, in 1982, 1984, and 1989, respectively. He was a Research Associate (1987) and Associate Professor (1991) at the Department of Applied Electronic Engineering, Tokyo Denki University, Saitama, Japan, and a research instructor at the Department of Surgery, Baylor College of Medicine, Houston, Texas, from 1990 to 1991. He was an Associate Professor at the Department of Precision Engineering (1998), then Associate Professor (1999) and Professor (2001) at the Graduate School of Frontier Sciences, the University of Tokyo. He is currently a Professor at the Department of Bioengineering and the Department of Precision Engineering, Director of the Medical Device Development and Regulation Research Center, and Vice Dean of the School of Engineering, the University of Tokyo. He is the immediate past president of the Japanese Society for Medical and Biological Engineering (JSMBE) (2014-2016) and also Deputy Director for Medical Devices, Center for Product Evaluation, Pharmaceuticals and Medical Devices Agency (PMDA). His research interests include 1) computer-aided surgery, 2) medical robotics and medical devices for minimally invasive therapy, 3) analysis and control of cardiac arrhythmia phenomena, and 4) regulatory science for medical device development. He has received various academic awards, including the Japan Society of Computer Aided Surgery Best Paper Award (2006) and the Robotics Society of Japan Best Paper Award (2010, 2015). In 2014, his group’s research was selected as one of the most exciting peer-reviewed optics research to have emerged over the past 12 months by Optics & Photonics News (OSA).

 

Ken Masamune, Tokyo Women’s Medical University

  • Medical Robot Link Architecture Connected to Smart Cyber Operating Theater (SCOT)
  • Abstract
    Nowadays, several medical devices/systems, including imaging machines, anesthesia systems, navigation systems, biomonitoring devices, the surgical bed, and medical robots, are installed in the operating room. However, these devices all operate in stand-alone mode, without time synchronization, which makes it difficult to combine and analyze their information to support the surgeon’s decisions during surgery. To improve this situation, we have been developing an integrated operating room named the “Smart Cyber Operating Theater” (SCOT) with the ORiN middleware system. In this presentation, we introduce the current SCOT project and the design concept of a new open-platform architecture for the integration of master/slave robotic devices and information-guided robots, especially for oral and maxillofacial surgery. This design will accelerate the development of any type of robotic interface/end effector with fast validation.
  • Biography
    Ken Masamune received the Ph.D. degree in precision machinery engineering from the University of Tokyo, Japan, in 1999. From 1995 to 1999, he was a Research Associate in the Department of Precision Machinery Engineering, the University of Tokyo. From 2000 to 2004, he was an Assistant Professor in the Department of Biotechnology, Tokyo Denki University, Tokyo. Since 2005, he has been an Associate Professor in the Department of Mechanoinformatics, Graduate School of Information Science and Technology, the University of Tokyo. His current research interests include computer-aided surgery, especially medical robotics and visualization devices and systems for surgery.

 

Jaesung Hong, Daegu Gyeongbuk Institute of Science and Technology

  • AUGMENTED REALITY FOR SURGICAL NAVIGATION
  • Abstract
    In recent years, augmented reality (AR) has become a key technology for surgical navigation. Using AR, the shapes of invisible organs are overlaid on the visible endoscopic or microscopic images, so the surgeon can avoid damaging healthy tissue and reduce the incision area. AR-based surgery generally uses an optical tracker and a camera. The optical tracker measures the position and pose of multiple markers, and the relationship between the camera and the patient’s target organs can be measured in real time by tracking markers mounted on the camera body and on the patient. In the AR display, finding the relationship between the optical marker mounted on the camera body and the camera center (camera registration) is particularly important, as this relationship strongly affects the overall accuracy of the AR display. In this talk, the latest AR technologies applied to surgical navigation are introduced.
  • Biography
    Jaesung Hong is an associate professor and the Department Chair of Robotics Engineering at the Daegu Gyeongbuk Institute of Science and Technology (DGIST), South Korea. His research area is medical imaging and medical robotics for minimally invasive surgery.
    At the University of Tokyo, he developed the world’s first US-guided needle-insertion robot that tracks a movable and deformable organ. This was reported in Physics in Medicine and Biology in 2004 and has been frequently cited (>160 times). While working at Kyushu University Hospital in Japan, he developed various customized surgical navigation systems, which were clinically applied in approximately 120 surgeries, including percutaneous ablation therapies for liver tumors, cochlear implant surgeries, neurosurgeries for gliomas, and dental implant surgeries.
    After moving to DGIST, a research-oriented university supported by the Korean government, he developed a single-port surgery robot and its master device for high force transmission and a large workspace, as well as a portable, AR-based surgical navigation system that has been tested in tibia tumor resections and orthognathic surgeries in collaboration with major Korean hospitals, including Seoul National University Bundang Hospital and Samsung Seoul Hospital. He is one of a small number of specialists familiar with both engineering and clinical medicine.
    As of 2016, Prof. Hong had published approximately 42 journal papers, including 30 SCI/SCIE papers with impact factors; six of them are in top-10%-ranked journals. He has also filed or registered 15 domestic and 7 international patents, received 9 best paper/presentation awards, and obtained 10 research grants amounting to approximately 4.5M USD (including planned budgets).
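The camera-registration relationship described in the abstract above is, in practice, a chain of rigid-body transforms: the tracker measures the poses of the camera-body marker and the patient marker, and a fixed marker-to-camera transform (the camera-registration result) closes the chain. A minimal sketch in Python/NumPy, with frame names and the convention that `T_a_b` maps points from frame `b` into frame `a` chosen for illustration rather than taken from the talk:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def inv_T(T):
    """Invert a rigid transform analytically: (R, t)^-1 = (R^T, -R^T t)."""
    R, t = T[:3, :3], T[:3, 3]
    return make_T(R.T, -R.T @ t)

def organ_in_camera(T_cam_cammarker, T_tracker_cammarker,
                    T_tracker_patmarker, T_patmarker_organ):
    """Map organ-frame points into the camera frame by chaining:
    camera <- camera marker <- tracker <- patient marker <- organ.
    T_cam_cammarker is the offline camera-registration result; the two
    T_tracker_* transforms are live optical-tracker measurements."""
    return (T_cam_cammarker @ inv_T(T_tracker_cammarker)
            @ T_tracker_patmarker @ T_patmarker_organ)
```

Any error in `T_cam_cammarker` propagates directly into the final overlay, which is why the abstract singles out camera registration as the accuracy-critical step.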

 

Ka-Wai Kwok, University of Hong Kong

  • Robot-Assisted Interventions under Intra-Operative MRI-Based Guidance
  • Abstract
    Advanced surgical robotics has attracted significant research interest in supporting image guidance, even magnetic resonance imaging (MRI), for effective navigation of surgical instruments. In situ guidance along access routes to the target anatomy, rendered from imaging data, can give a distinct awareness of the position of the robotic instrument tip relative to the target anatomy in various types of minimally invasive interventions. Such MRI-guided robots therefore rely on real-time processing for co-registration of the surgical plan with the imaging data captured during the intervention, as well as on computing the relative configuration between the instrument and the anatomy of surgical interest.
    This talk will present a compact robotic system capable of operating inside the bore of an MRI scanner, together with its solutions to the technical challenges of providing safe, effective catheter-based surgical manipulation. The proposed image processing system demonstrates its clinical potential for enhanced surgical safety by imposing visual feedback on tele-operated robotic instruments even under large-scale and rapid tissue deformations in soft-tissue surgeries, such as cardiac electrophysiology and stereotactic neurosurgery. The ultimate research objective is to enable the operator to perform safe, precise, and effective control of robotic instruments with the aid of pre- and intra-operative MRI models. The present work is timely in bridging the current technical gap between MRI and surgical robotic control.
  • Biography
    Dr. Ka-Wai Kwok is currently an assistant professor in the Department of Mechanical Engineering, The University of Hong Kong. He completed his PhD training at The Hamlyn Centre for Robotic Surgery, Imperial College London, in 2011, where he continued research on surgical robotics as a postdoctoral fellow. Dr. Kwok then obtained the Croucher Foundation Fellowship 2013-14, which supported his research jointly hosted by The University of Georgia and Brigham and Women’s Hospital – Harvard Medical School. His research interests focus on surgical robotics, intra-operative medical image processing, and their use of high-performance computing techniques. To date, he has been involved in various designs of surgical robotic devices and interfaces for endoscopy, laparoscopy, and stereotactic and intra-cardiac catheter interventions. His work has been recognized by several awards from IEEE international conferences, including ICRA’14, IROS’13, and FCCM’11. He is also a recipient of the Early Career Award 2015/16 offered by the Research Grants Council (RGC) of Hong Kong.

 

Hongliang Ren, National University of Singapore, Singapore

  • Transluminal Robotics with Delicate Continuum and Context Awareness
  • Biography
    Dr. Hongliang Ren is currently an assistant professor leading a research group on medical mechatronics in the Biomedical Engineering Department of the National University of Singapore (NUS). He is an affiliated Principal Investigator of the Singapore Institute for Neurotechnology (SINAPSE) and the Advanced Robotics Center at the National University of Singapore. Dr. Ren received his PhD in Electronic Engineering (specializing in Biomedical Engineering) from The Chinese University of Hong Kong (CUHK) in 2008. After graduation, he worked as a Research Fellow in the Laboratory for Computational Sensing and Robotics (LCSR) and the Engineering Research Center for Computer-Integrated Surgical Systems and Technology (ERC-CISST), Department of Biomedical Engineering and Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA, from 2008 to 2010. In 2010, he joined the Pediatric Cardiac Biorobotics Lab, Department of Cardiovascular Surgery, Children’s Hospital Boston & Harvard Medical School, USA, to investigate the beating-heart robotic surgery system. Prior to joining NUS, he also worked in 2012 on a collaborative computer-integrated surgery project at the Surgical Innovation Institute of Children’s National Medical Center, USA. His main areas of interest include Biomedical Mechatronics, Computer-Integrated Surgery, and Dynamic Positioning in Medicine.

 

Catherine Cai, National University of Singapore, Singapore

  • Biomimetic Wrinkled MXene Pressure Sensors towards Collision-Aware Robots
  • Abstract
    The use of surgical robots in minimally invasive neurosurgical procedures can offer several benefits and advantages; however, the lack of force sensing limits their use in such procedures. Equipping surgical robots with pressure sensors can enhance robot-environment interaction by enabling collision awareness, and enhance human-robot interaction by providing surgeons the force feedback necessary for safe tissue manipulation. With the emergence of soft robotics in biomedical applications, the attached pressure sensors are required to be flexible and stretchable so that they comply with the mechanically dynamic robotic movements and deformations. Inspired by the multi-dimensional wrinkles of the Shar-Pei dog’s skin, we have fabricated a flexible and stretchable piezoresistive pressure sensor consisting of MXene electrodes with biomimetic topographies. This pressure sensor is found to be more sensitive in the low-pressure regime (0.934 kPa^-1, <236 Pa) and less sensitive in the higher-pressure regime (0.188 kPa^-1, <2070 Pa).
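To make the two reported sensitivity figures concrete, the sensor response can be read as a piecewise-linear relative resistance change versus pressure. The continuity assumption at the regime boundary is ours, an illustrative model rather than the authors' fitted curve:

```python
def mxene_response(p_kpa):
    """Relative resistance change for an assumed continuous piecewise-linear
    response: 0.934 kPa^-1 below 0.236 kPa, then 0.188 kPa^-1 up to 2.070 kPa."""
    S_LOW, P_KNEE = 0.934, 0.236   # low-pressure sensitivity and its upper bound (reported)
    S_HIGH, P_MAX = 0.188, 2.070   # higher-pressure sensitivity and its upper bound (reported)
    if not 0.0 <= p_kpa <= P_MAX:
        raise ValueError("pressure outside the characterized range")
    if p_kpa <= P_KNEE:
        return S_LOW * p_kpa
    # continue from the knee with the shallower high-pressure slope
    return S_LOW * P_KNEE + S_HIGH * (p_kpa - P_KNEE)
```

The nearly 5x drop in slope between regimes is the point the abstract makes: the wrinkled topography trades high-pressure sensitivity for fine resolution at light contact, which suits collision awareness.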

Surgical Manipulator based on Parallel Mechanism

Abstract

We present a 5-DOF manipulator consisting of three parts: a 1-DOF translational joint, a bendable skeleton (2 DOFs for omni-directional bending motion), and a rotatable forceps gripper (1 DOF for rotation, 1 DOF for opening/closing). The bendable segment achieves two orthogonal bending DOFs by pulling or pushing three parallel universal-joint-based shaft chains. The forward and inverse kinematics of the bendable skeleton are analyzed. The workspace calculation shows that the structure of the three parallel shaft chains can reach a bending angle of 90 degrees in an arbitrary direction. The reachability of the manipulator is simulated in Adams. According to the surgical requirements, the manipulator is actuated to draw a circle during the tests while the end effector is kept bent at 60 degrees. The results show that the end effector can precisely track the planned trajectory (precision within 1 mm).
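The paper derives the skeleton's kinematics from its specific universal-joint shaft chains; as a generic illustration of how a bending angle and bending direction map to a tip position, the widely used constant-curvature approximation can be sketched as follows (an assumption for illustration, not the paper's exact model):

```python
import numpy as np

def cc_tip(L, theta, phi):
    """Tip position of a constant-curvature segment of arc length L,
    bent by angle theta (rad) in the bending plane at azimuth phi (rad)."""
    if abs(theta) < 1e-9:          # straight segment: avoid division by zero
        return np.array([0.0, 0.0, L])
    r = L / theta                  # radius of curvature of the backbone arc
    return np.array([r * (1 - np.cos(theta)) * np.cos(phi),
                     r * (1 - np.cos(theta)) * np.sin(phi),
                     r * np.sin(theta)])
```

For example, `cc_tip(100.0, np.pi / 2, 0.0)` places the tip of a 100 mm segment bent 90 degrees at roughly (63.7, 0, 63.7), i.e. at radius 2L/pi.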

Video demo

Publications

Q. Liu, J. Chen, S. Shen, B. Zhang, M. G. Fujie, C. M. Lim, and H. Ren, "Design, Kinematics, Simulation of Omni-directional Bending Reachability for a Parallel Structure Forceps Manipulator," BioRob 2016: 6th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Singapore, June 26-29, 2016.

Cardioscope

Abstract:

We present a novel flexible endoscope (FE) which is well suited to minimally invasive cardiac surgery (MICS). It is named the cardioscope. The cardioscope is composed of a handle, a rigid shaft, a steerable flexible section, and the imaging system. The flexible section is composed of an elastic tube, a number of spacing disks, a constraint tube, and four wires. It employs the constrained wire-driven flexible mechanism (CWFM) with a continuum backbone, which enables the control of both the angulation and the length of the flexible section. Compared to other endoscopes, e.g., rigid endoscope (RE) and fixed-length FE, the cardioscope is much more dexterous. The cardioscope can bend over 180 deg in all directions, and the bending is decoupled from the distal tip position. Ex vivo tests show that the cardioscope is well suited to MICS. It provides much wider scope of vision than REs and provides good manipulation inside confined environments. In our tests, the cardioscope successfully explored the full heart through a single hole, which shows that the design is promising. Despite being designed for MICS, the cardioscope can also be applied to other minimally invasive surgeries (MISs), such as laparoscopy, neurosurgery, transnasal surgery, and transoral surgery.
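For wire-driven bending sections like the one described, standard constant-curvature geometry relates the bending angle to differential wire lengths for wires routed at radius d around the backbone. This is a generic sketch under that assumption; the cardioscope's constrained mechanism additionally varies the section length, which this simple model does not capture:

```python
import math

def wire_lengths(L, theta, phi, d, betas):
    """Lengths of drive wires routed at radius d around a constant-curvature
    backbone of length L, bent by theta (rad) toward azimuth phi (rad).
    betas: angular positions of the wires around the backbone cross-section."""
    return [L - theta * d * math.cos(b - phi) for b in betas]
```

With two opposing wires (betas 0 and pi), bending by theta toward phi = 0 shortens one wire and lengthens the other by the same amount theta*d, so the total wire length is conserved.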

Demo video

Publications

  • Z. Li, M. Zin Oo, V. Nalam, V. Duc Thang, H. Ren, T. Kofidis, and H. Yu, "Design of a Novel Flexible Endoscope – Cardioscope," Journal of Mechanisms and Robotics, ASME, 2016, 8, 051014.
  • Z. Li, M. Z. Oo, V. D. Thang, V. Nalam, T. Kofidis, H. Yu, and H. Ren, "Design of a Novel Flexible Endoscope – Cardioscope," IDETC 2015: ASME 2015 International Design Engineering Technical Conferences, 2015.

ICBME 2016 Special Symposium in surgical navigation and robotics

Themes

  • Computer-Assisted Surgery
  • Flexible Robotics and navigation in Surgery
  • Artificial Intelligence in Robotic Surgery

Date: Dec 7, 2016 U-Town

Gallery

https://goo.gl/photos/vCNqYfWmD26UF41j7

Speakers and Topics



Keynote speaker: SAKUMA, Ichiro, University of Tokyo

  • COMPUTER AIDED SURGERY FOR ASSISTING MINIMALLY INVASIVE THERAPIES
  • Abstract
    Minimally invasive therapies such as endoscopic surgery and catheter-based intervention are spreading in many surgical fields, so engineering assistance is important for realizing safe and effective minimally invasive therapy. Computer-assisted surgical guidance, such as surgical navigation, is one of the key technologies. Application of robotic technology to minimally invasive surgery is expected to provide the following functions:
    (1) Precise manipulation of biological tissues and surgical instruments in a narrow and confined surgical field.
    (2) Precise and accurate localization of therapeutic devices using various pre- and intra-operative medical information.
    In the first mode of application, a more compact system is required. It can be realized by introducing novel mechanical designs and applying new mechanisms as well as new materials; at the same time, integration with various energy devices is also required. Intra-operative guidance utilizing various pre- and intra-operative information is necessary in the second mode of application, for example image-guided robotic systems for RF ablation, laser ablation, intensity-modulated radiation therapy, and high-intensity focused ultrasound. In this type of robot, various pre- and intra-operative information, including functional information, is used to navigate the therapeutic devices to the target lesion. Intra-operative identification of the pathological state of the target tissue and evaluation of the outcome after therapeutic intervention are also important. Factors limiting the application of surgical navigation systems and medical robotics include limited usability requiring additional preparation procedures, and high costs. Recent progress in computer vision technologies will solve part of these issues. For wider adoption of these technologies in the clinical environment, further improvement of usability, cost reduction, and accumulation of clinical evidence demonstrating efficacy from both clinical and economic points of view are required.
  • Biography
    Ichiro Sakuma received B.S., M.S., and Ph.D. degrees in precision machinery engineering from the University of Tokyo, Tokyo, Japan, in 1982, 1984, and 1989, respectively. He was a Research Associate (1987) and Associate Professor (1991) at the Department of Applied Electronic Engineering, Tokyo Denki University, Saitama, Japan, and a research instructor at the Department of Surgery, Baylor College of Medicine, Houston, Texas, from 1990 to 1991. He was an Associate Professor at the Department of Precision Engineering (1998), then Associate Professor (1999) and Professor (2001) at the Graduate School of Frontier Sciences, the University of Tokyo. He is currently a Professor at the Department of Bioengineering and the Department of Precision Engineering, Director of the Medical Device Development and Regulation Research Center, and Vice Dean of the School of Engineering, the University of Tokyo. He is the immediate past president of the Japanese Society for Medical and Biological Engineering (JSMBE) (2014-2016) and also Deputy Director for Medical Devices, Center for Product Evaluation, Pharmaceuticals and Medical Devices Agency (PMDA). His research interests include 1) computer-aided surgery, 2) medical robotics and medical devices for minimally invasive therapy, 3) analysis and control of cardiac arrhythmia phenomena, and 4) regulatory science for medical device development. He has received various academic awards, including the Japan Society of Computer Aided Surgery Best Paper Award (2006) and the Robotics Society of Japan Best Paper Award (2010, 2015). In 2014, his group’s research was selected as one of the most exciting peer-reviewed optics research to have emerged over the past 12 months by Optics & Photonics News (OSA).


Ken Masamune, Tokyo Women’s Medical University

  • OPEN PLATFORM OF MEDICAL ROBOTS/ DEVICES WITH SMART CYBER OPERATING THEATER (SCOT): DESIGN CONCEPT AND PROTOTYPE ROBOT DEVELOPMENTS
  • Abstract
    Nowadays, several medical devices/systems, including imaging machines, anesthesia systems, navigation systems, biomonitoring devices, the surgical bed, and medical robots, are installed in the operating room. However, these devices all operate in stand-alone mode, without time synchronization, which makes it difficult to combine and analyze their information to support the surgeon’s decisions during surgery. To improve this situation, we have been developing an integrated operating room named the “Smart Cyber Operating Theater” (SCOT) with the ORiN middleware system. In this presentation, we introduce the current SCOT project and the design concept of a new open-platform architecture for the integration of master/slave robotic devices and information-guided robots, especially for oral and maxillofacial surgery. This design will accelerate the development of any type of robotic interface/end effector with fast validation.
  • Biography
    Ken Masamune received the Ph.D. degree in precision machinery engineering from the University of Tokyo, Japan, in 1999. From 1995 to 1999, he was a Research Associate in the Department of Precision Machinery Engineering, the University of Tokyo. From 2000 to 2004, he was an Assistant Professor in the Department of Biotechnology, Tokyo Denki University, Tokyo. Since 2005, he has been an Associate Professor in the Department of Mechanoinformatics, Graduate School of Information Science and Technology, the University of Tokyo. His current research interests include computer-aided surgery, especially medical robotics and visualization devices and systems for surgery.


Jaesung Hong, Daegu Gyeongbuk Institute of Science and Technology

  • AUGMENTED REALITY FOR SURGICAL NAVIGATION
  • Abstract
    In recent years, augmented reality (AR) has become a key technology for surgical navigation. Using AR, the shapes of invisible organs are overlaid on the visible endoscopic or microscopic images, so the surgeon can avoid damaging healthy tissue and reduce the incision area. AR-based surgery generally uses an optical tracker and a camera. The optical tracker measures the position and pose of multiple markers, and the relationship between the camera and the patient’s target organs can be measured in real time by tracking markers mounted on the camera body and on the patient. In the AR display, finding the relationship between the optical marker mounted on the camera body and the camera center (camera registration) is particularly important, as this relationship strongly affects the overall accuracy of the AR display. In this talk, the latest AR technologies applied to surgical navigation are introduced.
  • Biography
    Jaesung Hong is an associate professor and the Department Chair of Robotics Engineering at the Daegu Gyeongbuk Institute of Science and Technology (DGIST), South Korea. His research area is medical imaging and medical robotics for minimally invasive surgery.
    At the University of Tokyo, he developed the world’s first US-guided needle-insertion robot that tracks a movable and deformable organ. This was reported in Physics in Medicine and Biology in 2004 and has been frequently cited (>160 times). While working at Kyushu University Hospital in Japan, he developed various customized surgical navigation systems, which were clinically applied in approximately 120 surgeries, including percutaneous ablation therapies for liver tumors, cochlear implant surgeries, neurosurgeries for gliomas, and dental implant surgeries.
    After moving to DGIST, a research-oriented university supported by the Korean government, he developed a single-port surgery robot and its master device for high force transmission and a large workspace, as well as a portable, AR-based surgical navigation system that has been tested in tibia tumor resections and orthognathic surgeries in collaboration with major Korean hospitals, including Seoul National University Bundang Hospital and Samsung Seoul Hospital. He is one of a small number of specialists familiar with both engineering and clinical medicine.
    As of 2016, Prof. Hong had published approximately 42 journal papers, including 30 SCI/SCIE papers with impact factors; six of them are in top-10%-ranked journals. He has also filed or registered 15 domestic and 7 international patents, received 9 best paper/presentation awards, and obtained 10 research grants amounting to approximately 4.5M USD (including planned budgets).


Ka-Wai Kwok, University of Hong Kong

  • MR-COMPATIBLE ROBOTIC SYSTEMS: TOWARDS THE INTRAOPERATIVE MRI-GUIDED INTERVENTIONS
  • Abstract
    Advanced surgical robotics has attracted significant research interest in supporting image guidance, even magnetic resonance imaging (MRI), for effective navigation of surgical instruments. In situ guidance along access routes to the target anatomy, rendered from imaging data, can give a distinct awareness of the position of the robotic instrument tip relative to the target anatomy in various types of minimally invasive interventions. Such MRI-guided robots therefore rely on real-time processing for co-registration of the surgical plan with the imaging data captured during the intervention, as well as on computing the relative configuration between the instrument and the anatomy of surgical interest.
    This talk will present a compact robotic system capable of operating inside the bore of an MRI scanner, together with its solutions to the technical challenges of providing safe, effective catheter-based surgical manipulation. The proposed image processing system demonstrates its clinical potential for enhanced surgical safety by imposing visual feedback on tele-operated robotic instruments even under large-scale and rapid tissue deformations in soft-tissue surgeries, such as cardiac electrophysiology and stereotactic neurosurgery. The ultimate research objective is to enable the operator to perform safe, precise, and effective control of robotic instruments with the aid of pre- and intra-operative MRI models. The present work is timely in bridging the current technical gap between MRI and surgical robotic control.
  • Biography
    Dr. Ka-Wai Kwok is currently an assistant professor in the Department of Mechanical Engineering, The University of Hong Kong. He completed his PhD training at The Hamlyn Centre for Robotic Surgery, Imperial College London, in 2011, where he continued research on surgical robotics as a postdoctoral fellow. Dr. Kwok then obtained the Croucher Foundation Fellowship 2013-14, which supported his research jointly hosted by The University of Georgia and Brigham and Women’s Hospital – Harvard Medical School. His research interests focus on surgical robotics, intra-operative medical image processing, and their use of high-performance computing techniques. To date, he has been involved in various designs of surgical robotic devices and interfaces for endoscopy, laparoscopy, and stereotactic and intra-cardiac catheter interventions. His work has been recognized by several awards from IEEE international conferences, including ICRA’14, IROS’13, and FCCM’11. He is also a recipient of the Early Career Award 2015/16 offered by the Research Grants Council (RGC) of Hong Kong.


Hongliang Ren, National University of Singapore, Singapore

  • TOWARDS MAGNETIC ACTUATED MICROROBOTIC NEEDLELESS INJECTION
  • Abstract
    The feasibility of a needleless magnetic-actuated device for intravitreal injections is investigated using three different design prototypes.
    A needleless device could significantly reduce patient anxiety levels and occurrences of needle-stick injuries to both healthcare workers and patients. Moreover, a magnetic-actuated device allows control of the current supplied to the device over time and, correspondingly, of the depth of penetration of the drug.
    Substitutes for the sclera and vitreous region were used in the experiments, in which a blue dye was injected using the two separate devices to determine whether the devices could eject the liquid with enough force to penetrate the sclera and deliver the liquid into the vitreous region, and whether there was a relationship between the current supplied to the devices and the depth of delivery. The solenoid prototype injector was not able to eject the liquid with the force required to penetrate the sclera; however, because the vitreous region was much softer, a follow-through current was predicted to deliver the bulk of the liquid to the middle portion of the vitreous substitute used in this experiment. Moreover, the addition of a controller to the system produced a two-part force on the liquid: an initial peak force to penetrate the sclera and a follow-through force to deliver the drug to the vitreous region.
  • Biography
    Dr. Hongliang Ren is currently an assistant professor leading a research group on medical mechatronics in the Biomedical Engineering Department of the National University of Singapore (NUS). He is an affiliated Principal Investigator of the Singapore Institute for Neurotechnology (SINAPSE) and the Advanced Robotics Center at NUS. Dr. Ren received his PhD in Electronic Engineering (specialized in Biomedical Engineering) from The Chinese University of Hong Kong (CUHK) in 2008. After graduation, he worked as a Research Fellow in the Laboratory for Computational Sensing and Robotics (LCSR) and the Engineering Research Center for Computer-Integrated Surgical Systems and Technology (ERC-CISST), Department of Biomedical Engineering and Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA, from 2008 to 2010. In 2010, he joined the Pediatric Cardiac Biorobotics Lab, Department of Cardiovascular Surgery, Children’s Hospital Boston & Harvard Medical School, USA, to investigate beating-heart robotic surgery systems. Prior to joining NUS, in 2012 he also worked on a collaborative computer-integrated surgery project at the Surgical Innovation Institute of Children’s National Medical Center, USA. His main areas of interest include Biomedical Mechatronics, Computer-Integrated Surgery, and Dynamic Positioning in Medicine.

Tele-Operation and Active Visual Servoing of a Compact Portable Continuum Tubular Robot

Demo Videos

– tele-operation, visual servoing and hybrid control (Summary)

– EIH VS in free space – EIH VS inside a skull model

– Eye-to-hand Visual Servoing

Project goals

Trans-orifice minimally invasive procedures have received increasing attention because of their lower infection risks, minimal scarring and shorter recovery times. Owing to its ability to retain force transmission while providing great dexterity, continuum tubular robotic technology has gained ever-increasing attention in minimally invasive surgery. The objective of this project is to design a compact and portable continuum tubular robot for transnasal procedures. Several control modes of the robot, including tele-operation, visual servoing and hybrid control, have been proposed so that the robot can accomplish different tasks in constrained surgical environments.

Approaches

Driven by the need for compactness and portability, we have developed a continuum tubular robot which is 35 cm in length, 10 cm in diameter and 2.15 kg in weight, and which can easily be integrated with a micro-robotic arm to perform complicated operations, as shown in Fig. 1. Comprehensive studies of both the kinematics and the workspace of the prototype have been carried out.
Fig. 1. Prototype of the proposed continuum tubular robot.
The workspace varies with the configuration of the DOFs as well as with the initial parameters of the tube pairs. In the following cases, the outer tubes are all assumed to dominate the inner tubes in stiffness. Calculation of the workspace relies on the forward kinematics of the robot and considers the motion constraints imposed by the structure. The workspaces of the 4-DOF robot with three different initial configurations are compared in Fig. 2 (left). Since the spatial workspace is rotationally symmetric, only the sectional workspace is displayed.
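As an illustrative sketch of how such a workspace can be computed from forward kinematics, the snippet below samples the joint space of a simplified 4-DOF two-tube robot under the common piecewise-constant-curvature assumption. The curvatures, lengths and stiffness-dominance simplification are hypothetical and do not correspond to the actual prototype parameters.

```python
import numpy as np

def rot_z(theta):
    """Axial rotation of a tube about the insertion (z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def arc(kappa, s):
    """Homogeneous transform of a constant-curvature arc of length s,
    bending in the x-z plane with the tangent along z."""
    if abs(kappa) < 1e-9:                     # straight segment
        T = np.eye(4)
        T[2, 3] = s
        return T
    th = kappa * s
    return np.array([[np.cos(th), 0, np.sin(th), (1 - np.cos(th)) / kappa],
                     [0, 1, 0, 0],
                     [-np.sin(th), 0, np.cos(th), np.sin(th) / kappa],
                     [0, 0, 0, 1]])

def tip_position(phi1, l1, phi2, l2, k1=1 / 80.0, k2=1 / 50.0):
    """Forward kinematics of a simplified 4-DOF two-tube robot: each tube
    contributes an axial rotation and an exposed arc length. The outer tube
    is assumed stiffness-dominant, so the inner tube only bends in its own
    exposed section (an illustrative assumption)."""
    T = rot_z(phi1) @ arc(k1, l1) @ rot_z(phi2) @ arc(k2, l2)
    return T[:3, 3]

# Sample the joint space to approximate the reachable workspace (in mm).
rng = np.random.default_rng(0)
pts = np.array([tip_position(rng.uniform(0, 2 * np.pi), rng.uniform(0, 60),
                             rng.uniform(0, 2 * np.pi), rng.uniform(0, 40))
                for _ in range(2000)])
print(pts.min(axis=0), pts.max(axis=0))  # bounding box of sampled tips
```

A sectional (x-z) scatter of `pts` would reproduce the kind of workspace comparison shown in the figures.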
Fig. 2. Workspace comparison for the 4-DOF CTR (left) & the 3-DOF CTR (right) with three initial configurations. Top: the outstretched part of the inner tube is fully exposed; Middle: the outstretched part of the inner tube is partially covered by the outer tube; Bottom: the outstretched part of the inner tube is totally covered by the outer tube.
When the outer tube is straight, its rotation changes neither the position nor the orientation of the tip. In this case, the robot degenerates to a 3-DOF one. Although the decrease in DOF weakens the dexterity of the robot, this configuration has its own advantages in some surgical applications. Take transnasal surgery as an example: as the nostril passage is generally straight, an unbent outer tube helps the robot pass through at the beginning. A similar workspace analysis is performed on the 3-DOF CTR with three different initial configurations, as shown in Fig. 2 (right). With different initial configurations, the workspaces also take on different shapes.
In addition, tele-operation of the robot is achieved using a haptic input device developed for 3D position control. A novel eye-in-hand active visual servoing system is also proposed, allowing the robot to resist unexpected perturbations automatically and deliver surgical tools actively in constrained environments. Finally, a hybrid control strategy combining tele-operation and visual servoing is investigated. Various experiments were conducted to evaluate the performance of the continuum tubular robot and the feasibility and effectiveness of the proposed tele-operation and visual servoing control modes in transnasal surgeries.
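For context, eye-in-hand visual servoing generally follows the image-based paradigm. The sketch below shows the classical point-feature interaction matrix and control law from the visual-servoing literature, not the paper's specific controller; the feature values and depths are made up for illustration.

```python
import numpy as np

def interaction_matrix(feats, depths):
    """Stack the classical 2x6 interaction matrix rows for each point
    feature (x, y) in normalized image coordinates at depth Z."""
    rows = []
    for (x, y), Z in zip(feats, depths):
        rows.append([-1 / Z, 0, x / Z, x * y, -(1 + x * x), y])
        rows.append([0, -1 / Z, y / Z, 1 + y * y, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(feats, feats_des, depths, lam=0.5):
    """Image-based visual servoing law: v = -lambda * pinv(L) * e,
    where e is the stacked feature error and v the 6-DOF camera twist."""
    e = (np.asarray(feats) - np.asarray(feats_des)).ravel()
    L = interaction_matrix(feats, depths)
    return -lam * np.linalg.pinv(L) @ e

# Example: four point features with a small error on the first one.
feats_des = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
feats = [(0.11, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
v = ibvs_velocity(feats, feats_des, depths=[0.5] * 4)
print(v)  # camera twist command driving the feature error to zero
```

Repeating this update in closed loop is what lets the tool track a target under perturbations.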

Related Publications

1. Liao Wu, Keyu Wu, Li Ting Lynette Teo, and Hongliang Ren, “Tele-Operation and Active Visual Servoing of a Compact Portable Continuum Tubular Robot in Constrained Environments”, Mechatronics, IEEE/ASME Transactions on (submitted)
2. Keyu Wu, Liao Wu and Hongliang Ren, “An Image Based Targeting Method to Guide a Tentacle-like Curvilinear Concentric Tube Robot”, ROBIO 2014, IEEE International Conference on Robotics and Biomimetics, 2014.

People involved

Staff: Liao Wu
Student: Keyu Wu
PI: Hongliang Ren

Deprecated videos FYI only

– tele-operation, visual servoing and hybrid control

– Eye-to-hand Visual Servoing

– Eye-in-hand Visual Servoing in free space

– Tele-operation of the compact tubular robot

– Eye-in-hand Visual Servoing inside a skull model

Simultaneous Hand-Eye, Tool-Flange and Robot-Robot Calibration for Co-manipulators by Solving AXB=YCZ Problem

Abstract

Multi-robot co-manipulation shows great potential to address the limitations of a single robot in complicated tasks such as robotic surgery. However, the dynamic setup poses great uncertainty arising from robot mobility and the unstructured environment. Therefore, the relationships among all the base frames (robot-robot calibration) and the relationships between the end-effectors and other devices such as cameras (hand-eye calibration) and tools (tool-flange calibration) have to be determined constantly in order to enable robotic cooperation in a constantly changing environment. We formulated the hand-eye, tool-flange and robot-robot calibration problem as a matrix equation AXB = YCZ. A series of generic geometric properties and lemmas were presented, leading to the derivation of the final simultaneous algorithm. In addition to the accurate iterative solution, a closed-form solution based on quaternions was also introduced to provide an initial value. To show the feasibility and superiority of the simultaneous method, two non-simultaneous methods were also proposed for comparison. Furthermore, thorough simulations under different noise levels and various robot movements were carried out for both the simultaneous and the non-simultaneous methods. Experiments on real robots were also performed to evaluate the proposed simultaneous method. The comparison results from both simulations and experiments demonstrated the superior accuracy and efficiency of the simultaneous method.

Problem Formulation

Measurement Data:
Homogeneous transformations from the robot bases to end-effector (A and C), and from tracker to marker (B).
Unknowns:
Homogeneous transformations from one robot base frame to another (Y), and from eye/tool to robot hand/flange (X and Z).
The measurable data A, B and C and the unknowns X, Y and Z form a transformation loop, which can be formulated as AXB = YCZ. (1)
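The transformation loop can be checked numerically with synthetic poses: given any X, Y, Z and measurements A and C, a consistent B is obtained by closing the loop. This is an illustrative sketch with random transforms, not the paper's code.

```python
import numpy as np

def random_pose(rng):
    """Random 4x4 homogeneous transform via a QR-based random rotation."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:      # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    T = np.eye(4)
    T[:3, :3] = q
    T[:3, 3] = rng.standard_normal(3)
    return T

rng = np.random.default_rng(1)
X, Y, Z = (random_pose(rng) for _ in range(3))
# Pick A and C freely; B is then determined by closing the loop AXB = YCZ.
A, C = random_pose(rng), random_pose(rng)
B = np.linalg.inv(A @ X) @ (Y @ C @ Z)
assert np.allclose(A @ X @ B, Y @ C @ Z)   # the loop closes
```

The same identity underlies the rearrangements used by the step-wise methods below, e.g. (AXB)inv(Z) = YC.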

Fig. 1: The relevance and differences among the problem defined in this paper and the other two classical problems in robotics. Our problem formulation can be considered as a superset of the other two.

Approaches

Non-simultaneous Methods

3-Step Method
In the non-simultaneous 3-Step method, X and Z in (1) are calculated separately as two hand-eye/tool-flange calibrations, each of which can be represented as an AX = XB problem in the first and second steps. This results in two data acquisition procedures, in which each manipulator in turn carries out at least two rotations whose rotational axes are neither parallel nor anti-parallel, while the other manipulator is kept immobile. The last unknown, the robot-robot relationship Y, can then be solved directly from the previously acquired data by least squares.
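As background, each AX = XB sub-problem admits a closed-form least-squares solution in the style of Park and Martin: the rotation is recovered from stacked rotation-logarithm vectors, and the translation from a linear system. The sketch below is illustrative, not the authors' implementation.

```python
import numpy as np

def rot_log(R):
    """Axis-angle vector (rotation logarithm) of a 3x3 rotation matrix."""
    th = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if th < 1e-9:
        return np.zeros(3)
    return th / (2 * np.sin(th)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def solve_ax_xb(As, Bs):
    """Least-squares solution of AX = XB from relative-motion pairs
    (A_i, B_i), given as 4x4 homogeneous transforms. Rotation follows
    the Park-Martin construction R_X = (M^T M)^(-1/2) M^T with
    M = sum_i beta_i alpha_i^T; translation solves the stacked linear
    system (R_A - I) t_X = R_X t_B - t_A."""
    M = sum(np.outer(rot_log(B[:3, :3]), rot_log(A[:3, :3]))
            for A, B in zip(As, Bs))
    w, V = np.linalg.eigh(M.T @ M)
    Rx = V @ np.diag(w ** -0.5) @ V.T @ M.T
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```

At least two motions with non-parallel rotation axes are needed, matching the data-acquisition requirement stated above.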
2-Step Method
The non-simultaneous 2-Step method formulates the original calibration problem as successive processes that solve AX = XB first and then AX = YB. The data acquisition procedures and the obtained data are the same as in the 3-Step method. In contrast to solving the robot-robot relationship independently, the 2-Step method solves the tool-flange/hand-eye and robot-robot transforms in an AX = YB manner in the second step. This is possible because the equation AXB = YCZ can be expressed as (AXB)inv(Z) = YC, which is in AX = YB form once the solution of X is known.

Simultaneous Method

Non-simultaneous methods suffer from error accumulation, since their later steps use earlier solutions as input; the inaccuracy produced in the earlier steps therefore propagates to the subsequent ones. In addition to accuracy, it is preferable that the two robots participate in the calibration procedure simultaneously, which significantly reduces the total time required.
In this regard, a simultaneous method is proposed to improve the accuracy and efficiency of the calibration by solving the original AXB = YCZ problem directly. During the data acquisition procedure, the manipulators simultaneously move to different configurations and the corresponding data sets A, B and C are recorded. The unknowns X, Y and Z are then solved simultaneously.
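One generic way to solve all three unknowns jointly is nonlinear least squares on the loop-closure residual, initialized from a closed-form guess. This SciPy-based sketch (with hypothetical function names) illustrates the idea; the paper's actual iterative algorithm is derived from geometric lemmas and is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def to_T(p):
    """6-vector (rotation vector, translation) -> 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(p[:3]).as_matrix()
    T[:3, 3] = p[3:]
    return T

def loop_residuals(params, As, Bs, Cs):
    """Frobenius residual of A_i X B_i - Y C_i Z over all data sets;
    params stacks the 6-vector parameterizations of X, Y, Z."""
    X, Y, Z = (to_T(params[6 * i:6 * i + 6]) for i in range(3))
    return np.concatenate([(A @ X @ B - Y @ C @ Z)[:3, :].ravel()
                           for A, B, C in zip(As, Bs, Cs)])

def solve_axb_ycz(As, Bs, Cs, x0):
    """Jointly refine X, Y, Z from an 18-vector initial guess x0,
    e.g. one supplied by a closed-form (quaternion-based) initialization."""
    sol = least_squares(loop_residuals, x0, args=(As, Bs, Cs))
    return [to_T(sol.x[6 * i:6 * i + 6]) for i in range(3)]
```

Because all data sets enter one joint objective, there is no step-to-step error accumulation.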

Evaluations

Simulations

To illustrate the feasibility of the proposed methods, intensive simulations have been carried out under different noise conditions and with different numbers of data sets.

Fig. 2: A schematic diagram which shows the experiment setup consisting of two Puma 560 manipulators, a tracking sensor and a target marker to solve the hand-eye, tool-flange and robot-robot calibration problem.

Simulations Results

For the rotational part, the three methods perform comparably in the accuracy of Z. However, the simultaneous method slightly outperforms the two non-simultaneous methods in the accuracy of X, and significantly outperforms them in the accuracy of Y. The results for the translational part are similar to the rotational ones. For the solution of Z, the accuracy of the simultaneous method is as good as that of the 3-Step method but slightly worse than that of the 2-Step method. However, the simultaneous method achieves a significant improvement in the accuracy of X and Y compared to the other two methods.
 

Experiments Results

Besides the simulations, extensive real experiments were conceived and carried out under different configurations to evaluate the proposed methods. As shown in Fig. 6, the experiments involved a Staubli TX60 robot (6 DOFs, average repeatability 0.02 mm), a Barrett WAM robot (4 DOFs, average repeatability 0.05 mm) and an NDI Polaris optical tracker (RMS repeatability 0.10 mm). The optical tracker was mounted on the last link of the Staubli robot, referred to as the sensor robot. The corresponding reflective marker was mounted on the last link of the WAM robot, referred to as the marker robot.

Fig. 6: The experiment is carried out using a Staubli TX60 robot and a Barrett WAM robot. An NDI Polaris optical tracker is mounted on the Staubli robot to track a reflective marker (not visible from the current camera angle) mounted on the WAM robot.

To demonstrate the superiority of the simultaneous method in real experimental scenarios, a 5-fold cross-validation was repeated 200 times for all calibration methods under all system configurations. For the simultaneous method, after data alignment and RANSAC processing, 80% of the remaining data were randomly selected to calculate the unknowns X, Y and Z, and 20% were used as test data to evaluate the performance. For the 2-Step and 3-Step methods, after calculating the unknowns with each method, the same test data from the simultaneous method were used to evaluate their performance.
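The repeated 5-fold protocol amounts to reshuffling the pose data on each repetition and cycling each fold through the test role, giving the 80%/20% split per fold. A minimal sketch of the indexing (generic cross-validation code, not the paper's evaluation pipeline):

```python
import numpy as np

def five_fold_splits(n, repeats=200, seed=0):
    """Repeated 5-fold partitioning of n samples: each repeat shuffles the
    indices and yields (train, test) index arrays, i.e. an 80/20 split."""
    rng = np.random.default_rng(seed)
    for _ in range(repeats):
        idx = rng.permutation(n)
        folds = np.array_split(idx, 5)
        for k in range(5):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(5) if j != k])
            yield train, test

# 200 repeats x 5 folds = 1000 (train, test) pairs per configuration.
splits = list(five_fold_splits(50, repeats=200))
print(len(splits))  # 1000
```

Each calibration method would be fitted on `train` and scored on `test` for every pair.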

In Fig. 7, the errors evaluated over the 200 repetitions of 5-fold cross-validation for the three proposed methods at three ranges are shown as box plots. Left-tail paired-samples t-tests were carried out to compare the performance of the simultaneous method against the 2-Step and 3-Step methods, respectively. The results indicate that the rotational and translational errors of the simultaneous method are very significantly smaller than those of the 2-Step and 3-Step methods. Only two non-significant results occur, in the rotational performance at the medium and far ranges when comparing the simultaneous method with the 3-Step one. Nevertheless, the simultaneous method outperforms the non-simultaneous ones in translational error at all ranges.
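A left-tail paired-samples t-test checks whether the mean per-trial error difference (simultaneous minus baseline) is below zero. It can be reproduced with SciPy on error samples; the data below are synthetic stand-ins, not the paper's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical per-trial errors: the "simultaneous" sample is constructed
# to be systematically smaller than the "2-Step" baseline.
rng = np.random.default_rng(0)
err_2step = rng.normal(1.0, 0.2, 200)
err_sim = err_2step - rng.normal(0.1, 0.05, 200)

# H1: mean(err_sim - err_2step) < 0, i.e. the simultaneous method is better.
t, p = stats.ttest_rel(err_sim, err_2step, alternative='less')
print(f"t = {t:.3f}, p = {p:.3g}")
```

A p-value below 0.01 would correspond to the "very significant" (**) marks in the figure.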


Fig. 7: Results of 200 repetitions of 5-fold cross-validation and left-tail paired-samples t-tests at the near, medium and far ranges. The box plots show the rotational and translational error distributions for the three methods at the three ranges. **, * and N.S. denote very significant at the 99% confidence level, significant at the 95% confidence level, and non-significant, respectively.

Related Publications

1. Liao Wu, Jiaole Wang, Max Q.-H. Meng, and Hongliang Ren, “Simultaneous Hand-Eye, Tool-Flange and Robot-Robot Calibration for Multi-robot Co-manipulation by Solving AXB = YCZ Problem”, Robotics, IEEE Transactions on (conditionally accepted)
2. Jiaole Wang, Liao Wu and Hongliang Ren, “Towards Simultaneous Coordinate Calibrations for Cooperative Multiple Robots”, Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, 2014: 410-415.

Institute & People Involved

The Chinese University of Hong Kong (CUHK): Jiaole Wang, Student Member, IEEE; Max Q.-H. Meng, Fellow, IEEE
National University of Singapore (NUS): Liao Wu; Hongliang Ren, Member, IEEE

Videos

-Calibration Experiments