Force/Torque Sensor for Tele-operated Catheterization Procedures

Abstract

A tele-operated robotic catheterization system can significantly relieve surgeons of radiation exposure and of the fatigue that results from standing for long periods in protective suits. Proximal force/torque signals carry critical information about the contact forces between the catheter and its surrounding structures. This paper presents a compact, cost-effective force and torque sensing device, suitable for catheterization procedures, that measures the proximal force/torque signals of the input catheter. The device consists of a rotatable, linearly retractable mechanism, a laser mouse sensor, and a coil spring. As the stretch, compression, and twist of the spring vary through the sliding joint, the force and torque signals can be computed based on Hooke's law. The proposed sensing device has many advantages: it is cost-effective, easily miniaturized and customized, and can be extended to MRI-compatible sensors. Experimental results with step responses and time-varying loads, compared against an ATI Nano17 force/torque sensor, show that the Root Mean Squared Errors (RMSE) for force and torque measurement are 0.042 N and 0.228 mNm, respectively.
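The Hooke's-law mapping from measured spring deflection to proximal load can be sketched in a few lines. The stiffness constants below are hypothetical placeholders, not the calibrated values from the paper:

```python
def proximal_force_torque(dx, dtheta, k_lin, k_tor):
    """Hooke's-law mapping from coil-spring deflection to proximal load.

    dx     : axial stretch/compression of the spring [m]
    dtheta : twist angle of the spring [rad]
    k_lin  : linear stiffness [N/m]      (hypothetical value below)
    k_tor  : torsional stiffness [Nm/rad] (hypothetical value below)
    """
    force = k_lin * dx        # F = k * x
    torque = k_tor * dtheta   # tau = k_t * theta
    return force, torque

# Example: 1 mm compression and 0.05 rad twist, placeholder stiffnesses
F, T = proximal_force_torque(dx=1e-3, dtheta=0.05, k_lin=120.0, k_tor=0.02)
```

In the actual device, `dx` and `dtheta` would be derived from the laser mouse sensor's readout of the sliding joint.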

Video PPT demo

Publications

J. Guo; M. Li; P. Ho & H. Ren Design and Performance Evaluation of a Force/Torque Sensor for Tele-operated Catheterization Procedures IEEE Sensors Journal, 2016, PP, 1-8

Data-driven Learning Intelligent Control for Flexible Surgical Manipulators

Abstract

Automate Surgical Tasks for A Flexible Serpentine Manipulator via Learning Actuation Space Trajectory from Demonstration

Background: Accurate motion control of flexible surgical manipulators is crucial in tissue manipulation tasks. The tendon-driven serpentine manipulator (TSM) is one of the most widely adopted flexible mechanisms in MIS because of its enhanced maneuverability in tortuous environments. TSMs, however, exhibit high nonlinearities, and conventional analytical kinematics models are insufficient to achieve high accuracy.
Methods: To account for the system nonlinearities, we applied a data-driven approach to encode the system's inverse kinematics. Three regression methods, Extreme Learning Machine (ELM), Gaussian Mixture Regression (GMR), and K-Nearest Neighbors Regression (KNNR), were implemented to learn a nonlinear mapping from the robot's 3D position state to the control inputs.
Results: The performance of the three algorithms was evaluated in both simulation and physical trajectory tracking experiments. KNNR performs the best in the tracking experiments, with the lowest RMSE of 2.1275 mm.
Conclusions: The proposed inverse kinematics learning methods provide an alternative and efficient way to accurately model the challenging tendon-driven flexible manipulator.
Keywords: Tendon-driven serpentine manipulator; surgical robotics; inverse kinematics; heuristic methods
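As a rough illustration of the best-performing method, KNNR predicts the actuation for a desired tip position by averaging the actuations of the k nearest recorded positions. A minimal sketch with toy data; the sample poses and two-actuator setup are hypothetical:

```python
import numpy as np

def knnr_inverse_kinematics(positions, actuations, query, k=3):
    """KNN regression from 3D tip position to actuation inputs.

    positions  : (N, 3) recorded tip positions
    actuations : (N, M) corresponding actuation commands
    query      : (3,)  desired tip position
    """
    d = np.linalg.norm(positions - query, axis=1)  # distances to all samples
    nearest = np.argsort(d)[:k]                    # indices of k closest poses
    return actuations[nearest].mean(axis=0)        # average their actuations

# Toy, hypothetical data: 4 recorded samples with 2 actuators
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
A = np.array([[0.0, 0], [1, 0], [0, 1], [1, 1]])
u = knnr_inverse_kinematics(P, A, np.array([0.8, 0.1, 0.0]), k=2)
```

Distance-weighted averaging of the neighbors is a common refinement of this plain mean.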

Demo video at:

Publications

  • W. Xu; J. Chen; H. Y. Lau & H. Ren Data-driven Methods towards Learning the Highly Nonlinear Inverse Kinematics of Tendon-driven Surgical Manipulators International Journal of Medical Robotics and Computer Assisted Surgery , 2016, 1-13
  • W. Xu; J. Chen; H. Y. Lau & H. Ren Automate Surgical Tasks for A Flexible Serpentine Manipulator via Learning Actuation Space Trajectory from Demonstration ICRA2016, IEEE International Conference on Robotics and Automation, 2016

Motion Planning of Flexible Manipulators by Learning from Human Expert Demonstrations

Abstract

Motion Planning of Multiple-Segment Flexible, Soft, and Continuum Manipulators by Learning from Human Expert Demonstrations

Multiple-segment flexible and soft robotic actuators exhibit compliance, but their redundant degrees of freedom make path planning difficult, even though they are promising for complex tasks such as crossing body cavities to grasp objects. We propose a learning-from-demonstration method that plans the motion paths of flexible manipulators using statistical machine-learning algorithms. To encode demonstrated trajectories and estimate suitable paths for the manipulators to reproduce a task, models are built on a Gaussian Mixture Model and Gaussian Mixture Regression, respectively. The forward and inverse kinematic models of the soft robotic arm are derived for motion control. A flexible and soft robotic manipulator verifies the learned paths by successfully completing a representative task of navigating through a narrow keyhole.
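The GMR step used to reproduce a trajectory can be sketched as follows, assuming the GMM parameters have already been fitted (e.g., by EM on the demonstrations). The two-component model below is hand-set purely for illustration:

```python
import numpy as np

def gmr(t_query, priors, means, covs):
    """Gaussian Mixture Regression: estimate E[x | t] from a GMM over (t, x).

    priors : (K,)      mixture weights
    means  : (K, D)    component means; dim 0 is the input t, dims 1: are x
    covs   : (K, D, D) component covariances
    """
    K = len(priors)
    h = np.empty(K)
    xhat = np.zeros(means.shape[1] - 1)
    # responsibility of each Gaussian component for the query input t
    for k in range(K):
        mt, st = means[k, 0], covs[k, 0, 0]
        h[k] = priors[k] * np.exp(-0.5 * (t_query - mt) ** 2 / st) / np.sqrt(2 * np.pi * st)
    h /= h.sum()
    # responsibility-weighted conditional mean of each component
    for k in range(K):
        cond = means[k, 1:] + covs[k, 1:, 0] / covs[k, 0, 0] * (t_query - means[k, 0])
        xhat += h[k] * cond
    return xhat

# Hand-set two-component GMM over (t, x), x one-dimensional
priors = np.array([0.5, 0.5])
means = np.array([[0.0, 0.0], [1.0, 1.0]])
covs = np.array([np.eye(2), np.eye(2)])
xhat = gmr(0.0, priors, means, covs)
```

Evaluating `gmr` along a time index t yields the estimated path for the manipulator to reproduce.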

Demo video at:


Publications

  • H. Wang; J. Chen; H. Y. Lau & H. Ren Motion Planning of IPMC Flexible Manipulators by Learning from Human Expert Demonstrations ICRA2016, IEEE International Conference on Robotics and Automation, 2016
  • J. Chen; H. Ren & A. Lau Learning Reaching Movement Primitives from Human Demonstrations with Gaussian Mixture Regression and Stabilized Dynamical Systems International Conference on Control Science and Systems Engineering ICCSSE 2016, 2016
  • J. Chen; W. Xu; A. Lau & H. Ren Towards Transferring Skills to Flexible Surgical Robots with Programming by Demonstration and Reinforcement Learning The Eighth International Conference on Advanced Computational Intelligence (ICACI2016), 2016
  • J. Chen; W. Xu; H. Ren & H. Y. Lau Automate Adaptive Robot Reaching Movement Based on Learning from Human Demonstrations with Dynamical Systems ROBIO2016, 2016

Ultrasound Assisted Guidance with Force Cues for Intravascular Interventions

Project Goals


Image guidance during minimally invasive cardiovascular interventions is primarily achieved with X-ray fluoroscopy, which has several limitations, including limited 3D imaging, significant radiation doses to operators, and a lack of contact force measurement between the cardiovascular anatomy and interventional tools. Ultrasound imaging may complement or possibly replace 2D fluoroscopy for intravascular interventions because of its portability, safety, and ability to provide depth information. However, it is challenging to clearly visualize catheters and guidewires in ultrasound images. In this paper, we developed a novel method to locate the position and orientation of the catheter tip in 2D ultrasound images in real time by detecting and tracking a passive marker attached to the catheter tip. Moreover, the contact force can also be measured in real time from the length variation of the marker. An active geometrical structure model based method was proposed to detect the initial position of the marker, and a KLT (Kanade-Lucas-Tomasi) based algorithm was developed to track the position, orientation, and length of the marker. The ex vivo experimental results indicate that the proposed method is able to automatically locate the catheter tip in ultrasound images and sense the contact force, facilitating the operators' work during intravascular interventions.
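The force-from-marker-length idea can be illustrated with a simple linear calibration fitted by least squares. The numbers below are hypothetical; the paper's actual calibration procedure and force model may differ:

```python
import numpy as np

def fit_length_to_force(lengths_px, forces_n):
    """Least-squares linear calibration F = a * L + b from measured pairs."""
    A = np.vstack([lengths_px, np.ones_like(lengths_px)]).T
    coef, *_ = np.linalg.lstsq(A, forces_n, rcond=None)
    return coef  # (a, b)

# Hypothetical calibration pairs: marker length (pixels) vs reference force (N)
L = np.array([40.0, 38.0, 36.0, 34.0])
F = np.array([0.0, 0.1, 0.2, 0.3])
a, b = fit_length_to_force(L, F)

# Estimate the contact force for a tracked marker length of 35 px
force_at_35px = a * 35.0 + b
```

At run time, the marker length reported by the tracker would be fed through this mapping on every frame.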

Approaches/Results/Video

People Involved

Research Fellow: Jin Guo
Project Investigator: Hongliang Ren

Related Publications

TBA

Tracking Magnetic Particles under Ultrasound Imaging using Contrast-Enhancing Microbubbles

Abstract

Magnetic microbubbles that can be controlled by an external magnetic field have been explored as a method for precise and efficient drug delivery. In this paper, a technique for fabricating microbubble-encapsulated magnetic spheres is presented. The resultant magnetic spheres were imaged using ultrasound; the encapsulated microbubbles appeared as bright spots and enhanced the ultrasound image contrast, whereas the solid magnetic spheres appeared dull. A tracking algorithm based on optical flow was then developed to track the magnetic microbubbles. Further development of the magnetic microbubbles and the tracking algorithm can lead to future use with in vivo injection of the magnetic microbubbles.
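A minimal stand-in for the detection step, before optical-flow tracking takes over, is thresholding for the high-intensity region the microbubbles produce. The synthetic frame below is purely illustrative:

```python
import numpy as np

def detect_bright_spot(img, thresh):
    """Return the centroid (cx, cy) of pixels above an intensity threshold.

    Microbubble-filled spheres appear as high-intensity regions in the
    B-mode image, so a global threshold isolates the candidate blob.
    """
    ys, xs = np.nonzero(img > thresh)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

# Synthetic 8-bit "ultrasound" frame with one bright blob centered at (12, 7)
frame = np.zeros((20, 20), dtype=np.uint8)
frame[6:9, 11:14] = 255          # rows 6-8, cols 11-13
cx, cy = detect_bright_spot(frame, thresh=128)
```

In practice the detected centroid would seed an optical-flow tracker that follows the spot across frames.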

Publications

1. Loh Kai Ting, Ren Hongliang and Li Jun, Tracking Magnetic Particles under Ultrasound Imaging using Contrast-Enhancing Microbubbles, The 11th Asian Conference on Computer Aided Surgery, 2015.

Poster in BME Showcase 2015

KT Poster Final Printed

Tele-Operation and Active Visual Servoing of a Compact Portable Continuum Tubular Robot

Demo Videos

– tele-operation, visual servoing and hybrid control (Summary)

– EIH VS in free space – EIH VS inside a skull model

– Eye-to-hand Visual Servoing

Project goals

Trans-orifice minimally invasive procedures have received increasing attention because of their lower infection risk, minimal scarring, and shorter recovery times. Owing to its ability to retain force transmission while offering great dexterity, continuum tubular robot technology has gained ever-increasing attention in minimally invasive surgery. The objective of this project is to design a compact, portable continuum tubular robot for transnasal procedures. Several control modes, including tele-operation, visual servoing, and hybrid control, have been proposed so that the robot can accomplish different tasks in constrained surgical environments.

Approaches

Driven by the need for compactness and portability, we have developed a continuum tubular robot that is 35 cm long, 10 cm in diameter, and 2.15 kg in weight, and that can easily be integrated with a micro-robotic arm to perform complicated operations, as shown in Fig. 1. Comprehensive studies of both the kinematics and the workspace of the prototype have been carried out.
figsetup-2jbxtci
Fig. 1. Prototype of the proposed continuum tubular robot.
The workspace varies with the configuration of the DOFs as well as the initial parameters of the tube pairs. The outer tubes in the following cases are all assumed to dominate the inner tubes in stiffness. Calculation of the workspace relies on the forward kinematics of the robot and considers the motion constraints imposed by the structure. The workspaces of the 4-DOF robot with three different initial configurations are compared in Fig. 2 (left). Since the spatial workspace is rotationally symmetric, only the sectional workspace is displayed.
fig2workspace         fig3-3dof
Fig. 2. Workspace comparison for the 4-DOF CTR (left) and the 3-DOF CTR (right) with three initial configurations. Top: the outstretched part of the inner tube is fully exposed; middle: the outstretched part of the inner tube is partially covered by the outer tube; bottom: the outstretched part of the inner tube is totally covered by the outer tube.
When the outer tube is straight, its rotation changes neither the position nor the orientation of the tip. In this case, the robot degenerates to a 3-DOF one. Although the decrease in DOFs weakens the dexterity of the robot, this configuration has its own advantages in some surgical applications. Take transnasal surgery as an example: since the nostril passage is generally straight, an unbent outer tube helps the robot pass through at the beginning. A similar workspace analysis is performed on the 3-DOF CTR with three different initial configurations, as shown in Fig. 2 (right). With different initial configurations, the workspaces also present different shapes.
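A workspace calculation of this kind can be sketched for the degenerate 3-DOF case by sampling the forward kinematics, here under a simplified piecewise constant-curvature assumption that ignores the tube interaction mechanics (the project's actual kinematics are more detailed, and all parameters below are hypothetical):

```python
import numpy as np

def arc_tip(kappa, s):
    """Tip of a planar constant-curvature arc with curvature kappa, length s."""
    if abs(kappa) < 1e-9:
        return np.array([0.0, s])
    return np.array([(1 - np.cos(kappa * s)) / kappa, np.sin(kappa * s) / kappa])

def sample_sectional_workspace(kappa, s_max, d_max, n=50):
    """Sectional (2D) workspace of a 3-DOF CTR with a straight, stiff outer
    tube: the curved inner tube extends by s, the assembly inserts by d."""
    pts = []
    for s in np.linspace(0, s_max, n):       # exposed curved length
        for d in np.linspace(0, d_max, n):   # straight insertion depth
            x, z = arc_tip(kappa, s)
            pts.append((x, z + d))           # insertion shifts the tip along z
    return np.array(pts)

# Hypothetical tube parameters: curvature 10 1/m, 5 cm arc, 3 cm insertion
ws = sample_sectional_workspace(kappa=10.0, s_max=0.05, d_max=0.03)
```

Revolving the sampled section about the insertion axis recovers the rotationally symmetric spatial workspace mentioned above.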
In addition, tele-operation of the robot is achieved using a haptic input device developed for 3D position control. A novel eye-in-hand active visual servoing system is also proposed so that the robot can resist unexpected perturbations automatically and deliver surgical tools actively in constrained environments. Finally, a hybrid control strategy combining tele-operation and visual servoing is investigated. Various experiments were conducted to evaluate the performance of the continuum tubular robot and the feasibility and effectiveness of the proposed tele-operation and visual servoing control modes in transnasal surgeries.

Related Publications

1. Liao Wu, Keyu Wu, Li Ting Lynette Teo, and Hongliang Ren, “Tele-Operation and Active Visual Servoing of a Compact Portable Continuum Tubular Robot in Constrained Environments”, Mechatronics, IEEE/ASME Transactions on (submitted)
2. Keyu Wu, Liao Wu and Hongliang Ren, “An Image Based Targeting Method to Guide a Tentacle-like Curvilinear Concentric Tube Robot”, ROBIO 2014, IEEE International Conference on Robotics and Biomimetics, 2014.

People involved

Staff: Liao Wu
Student: Keyu Wu
PI: Hongliang Ren

Deprecated videos FYI only

– tele-operation, visual servoing and hybrid control

– Eye-to-hand Visual Servoing

– Eye-in-hand Visual Servoing in free space

–Tele-operation of the compact tubular robot

– Eye-in-hand Visual Servoing inside a skull model

Simultaneous Hand-Eye, Tool-Flange and Robot-Robot Calibration for Co-manipulators by Solving AXB=YCZ Problem

Abstract

Multi-robot co-manipulation shows great potential for addressing the limitations of a single robot in complicated tasks such as robotic surgery. However, the dynamic setup poses great uncertainties owing to robot mobility and the unstructured environment. Therefore, the relationships among all the base frames (robot-robot calibration) and the relationships between the end-effectors and other devices such as cameras (hand-eye calibration) and tools (tool-flange calibration) have to be determined constantly to enable robotic cooperation in a continuously changing environment. We formulated the hand-eye, tool-flange, and robot-robot calibration problem as a matrix equation AXB=YCZ. A series of generic geometric properties and lemmas are presented, leading to the derivation of the final simultaneous algorithm. In addition to the accurate iterative solution, a closed-form solution based on quaternions is introduced to provide an initial value. To show the feasibility and superiority of the simultaneous method, two non-simultaneous methods are also proposed for comparison. Furthermore, thorough simulations under different noise levels and various robot movements were carried out for both the simultaneous and non-simultaneous methods. Experiments on real robots were also performed to evaluate the proposed simultaneous method. The comparison results from both simulations and experiments demonstrate the superior accuracy and efficiency of the simultaneous method.

Problem Formulation

Measurement Data:
Homogeneous transformations from the robot bases to end-effector (A and C), and from tracker to marker (B).
Unknowns:
Homogeneous transformations from one robot base frame to another (Y), and from eye/tool to robot hand/flange (X and Z).
The measurable data A, B, and C, and the unknowns X, Y, and Z form a transformation loop, which can be formulated as AXB = YCZ (1).
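The transformation loop in (1) can be checked numerically with homogeneous transforms. The poses below are hypothetical, and the actual solving of X, Y, and Z (the paper's iterative and quaternion-based methods) is not shown; the snippet only demonstrates the loop-closure constraint:

```python
import numpy as np

def hom(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Ground-truth unknowns (hypothetical): hand-eye X, robot-robot Y, tool-flange Z
X = hom(rot_z(0.3), [0.1, 0.0, 0.05])
Y = hom(rot_z(1.2), [1.0, 0.5, 0.0])
Z = hom(rot_z(-0.7), [0.0, 0.02, 0.1])

# Simulate one measurement: pick robot poses A and C, then the tracker-to-marker
# reading B is fixed by loop closure, B = X^-1 A^-1 Y C Z
A = hom(rot_z(0.5), [0.2, 0.1, 0.3])
C = hom(rot_z(-0.4), [0.3, 0.2, 0.1])
B = np.linalg.inv(X) @ np.linalg.inv(A) @ Y @ C @ Z

residual = A @ X @ B - Y @ C @ Z   # zero when the loop closes
```

Stacking many such (A, B, C) triplets from different configurations is what makes the unknowns observable.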
problem

Fig. 1: The relationship and differences between the problem defined in this paper and two other classical problems in robotics. Our problem formulation can be considered a superset of the other two.

Approaches

Non-simultaneous Methods

3-Step Method
In the non-simultaneous 3-Step method, X and Z in (1) are calculated separately as two hand-eye/tool-flange calibrations, each represented as an AX = XB problem, in the first and second steps. This requires two data acquisition procedures, in which the two manipulators take turns carrying out at least two rotations whose rotational axes are neither parallel nor anti-parallel while the other manipulator is kept immobile. The last unknown, the robot-robot relationship Y, can then be solved directly from the previously retrieved data by the method of least squares.
2-Step Method
The non-simultaneous 2-Step method formulates the original calibration problem as successive processes that solve AX = XB first and then AX = YB. The data acquisition procedures and the obtained data are the same as in the 3-Step method. In contrast to solving the robot-robot relationship independently, the 2-Step method solves the tool-flange/hand-eye and robot-robot transforms in an AX = YB manner in the second step. This is possible because the equation AXB = YCZ can be rewritten as (AXB)inv(Z) = YC, which has the AX = YB form once X is known.

Simultaneous Method

Non-simultaneous methods suffer from error accumulation, since later steps use the previous solutions as input; inaccuracy produced in earlier steps therefore propagates to subsequent ones. In addition to accuracy, it is preferable for the two robots to participate in the calibration procedure simultaneously, which significantly reduces the total time required.
To address this, a simultaneous method is proposed to improve the accuracy and efficiency of the calibration by solving the original AXB = YCZ problem directly. During the data acquisition procedure, the manipulators move simultaneously to different configurations, and the corresponding data sets A, B, and C are recorded. The unknowns X, Y, and Z are then solved simultaneously.

Evaluations

Simulations

To illustrate the feasibility of the proposed methods, intensive simulations have been carried out under different noise situations and by using different numbers of data sets.
simulation

Fig. 2: A schematic diagram which shows the experiment setup consisting of two Puma 560 manipulators, a tracking sensor and a target marker to solve the hand-eye, tool-flange and robot-robot calibration problem.

Simulations Results

For the rotational part, the three methods perform comparably in the accuracy of Z. However, the simultaneous method slightly outperforms the two non-simultaneous methods in the accuracy of X and significantly outperforms them in the accuracy of Y. The results for the translational part are similar to the rotational ones. For the solution of Z, the accuracy of the simultaneous method is as good as the 3-Step method but slightly worse than the 2-Step method. However, the simultaneous method achieves a significant improvement in the accuracy of X and Y compared to the other two methods.
simulation1
simulation2
simulation3

Experiments Results

Besides the simulations, extensive real experiments were designed and carried out under different configurations to evaluate the proposed methods. As shown in Fig. 6, the experiments involved a Staubli TX60 robot (6 DOFs, average repeatability 0.02 mm), a Barrett WAM robot (4 DOFs, average repeatability 0.05 mm), and an NDI Polaris optical tracker (RMS repeatability 0.10 mm). The optical tracker was mounted on the last link of the Staubli robot, referred to as the sensor robot. The corresponding reflective marker was mounted on the last link of the WAM robot, referred to as the marker robot.
experiment

Fig. 6: The experiment is carried out using a Staubli TX60 robot and a Barrett WAM robot. An NDI Polaris optical tracker is mounted on the Staubli robot to track a reflective marker (not visible from the current camera angle) mounted on the WAM robot.

To demonstrate the superiority of the simultaneous method in real experimental scenarios, 5-fold cross-validation was repeated 200 times for all the calibration methods under all system configurations. For the simultaneous method, after data alignment and RANSAC processing, 80% of the remaining data are randomly selected to calculate the unknowns X, Y, and Z, and 20% are used as test data to evaluate the performance. For the 2-Step and 3-Step methods, after calculating the unknowns with each method, the same test data from the simultaneous method are used to evaluate their performance.

In Fig. 7, the errors from the 200 runs of 5-fold cross-validation for the three proposed methods at three ranges are shown as box plots. Left-tail paired-samples t-tests were carried out to compare the performance of the simultaneous method against the 2-Step and 3-Step methods, respectively. The results indicate that the rotational and translational errors of the simultaneous method are very significantly smaller than those of the 2-Step and 3-Step methods. Only two non-significant results exist, in the rotational performance at the medium and far ranges when comparing the simultaneous method with the 3-Step one. Nevertheless, the simultaneous method outperforms the non-simultaneous ones in translational error at all ranges.

experiment1

Fig. 7: Results of the 200 runs of 5-fold cross-validation and left-tail paired-samples t-tests at the near, medium, and far ranges. The box plots show the rotational and translational error distributions for the three methods at the three ranges. **, *, and N.S. stand for very significant at the 99% confidence level, significant at the 95% confidence level, and non-significant, respectively.

Related Publications

1. Liao Wu, Jiaole Wang, Max Q.-H. Meng, and Hongliang Ren, Simultaneous Hand-Eye, Tool-Flange and Robot-Robot Calibration for Multi-robot Co-manipulation by Solving AXB = YCZ Problem, Robotics, IEEE Transactions on (Conditionally accepted)
2. Jiaole Wang, Liao Wu and Hongliang Ren, Towards simultaneous coordinate calibrations for cooperative multiple robots, Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on. IEEE, 2014: 410-415.

Institute & People Involved

The Chinese University of Hong Kong (CUHK): Jiaole Wang, Student Member, IEEE; Max Q.-H. Meng, Fellow, IEEE
National University of Singapore (NUS): Liao Wu; Hongliang Ren, Member, IEEE

Videos

-Calibration Experiments

Tip Tracking and Shape Sensing for Flexible Surgical Robots

Project Goals

As a typical minimally invasive surgery, transoral surgery brings patients significant benefits such as decreased intra-operative blood loss, lower post-operative complication morbidity, and shorter hospitalization and recovery periods. Flexible surgical robots (such as tendon/wire/cable-driven robots and concentric tube robots) are efficient devices for transoral inspection and diagnosis and can work well in complicated, confined environments. One drawback, however, is that the real-time tip position and shape information cannot be well estimated, especially when there is a payload on the end effector. To address these challenges, we focus on a novel tip tracking and shape sensing method for flexible surgical robots.

Approaches

The proposed method is based on the positional and directional information of a limited number of specific joints of the robot, which are estimated with an electromagnetic tracking system. An electromagnetic sensor is mounted at the tip of the robot to provide tip position and direction information. Depending on the number of sections of the robot, additional sensors are mounted at specific positions along the robot to realize shape sensing. The shape sensing method is based on multiple quadratic Bézier curves.
Fig1electromagnetic

Fig.1 Electromagnetic tracking method.

The electromagnetic tracking method is shown in Fig.1. A uniaxial sensing coil is used as the target and senses the magnetic fields generated by six transmitting coils, which are excited sequentially. The position and orientation of the sensing coil can then be estimated from the sensed signals.
Fig2shapesensing

Fig.2 Shape sensing method.

Fig.2 shows the shape sensing method for a multi-section flexible robot using multiple quadratic Bézier curves. For an N-section robot, ⌈N/2⌉ electromagnetic sensors are mounted at the tail of the (N-2k)th section, where 0≤k<N/2. By utilizing the positional and directional information of the sensors, each section can then be reconstructed as a quadratic Bézier curve. Compared to image-based methods, this method is easy to set up; compared to FBG-based methods, curvature information is not needed and fewer sensors are required.
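Since the tangent lines at both ends of a quadratic Bézier curve pass through its middle control point, a section can be reconstructed from the sensed endpoint positions and directions. A minimal sketch with a planar example and hypothetical poses (the project's full multi-section pipeline chains such reconstructions together):

```python
import numpy as np

def bezier_control_point(p0, d0, p2, d2):
    """Estimate the middle control point P1 of a quadratic Bezier section.

    The end tangents of a quadratic Bezier both pass through P1, so P1 is
    taken as the midpoint of the common perpendicular of the tangent lines
    p0 + s*d0 and p2 + t*d2 (a least-squares intersection in 3D).
    """
    d0, d2 = d0 / np.linalg.norm(d0), d2 / np.linalg.norm(d2)
    # Solve [d0 -d2] [s t]^T ~= p2 - p0 in the least-squares sense
    A = np.stack([d0, -d2], axis=1)
    s, t = np.linalg.lstsq(A, p2 - p0, rcond=None)[0]
    return 0.5 * ((p0 + s * d0) + (p2 + t * d2))

def bezier(p0, p1, p2, u):
    """Point on the quadratic Bezier curve at parameter u in [0, 1]."""
    return (1 - u) ** 2 * p0 + 2 * u * (1 - u) * p1 + u ** 2 * p2

# Planar sanity check: endpoint tangents at 45 degrees meeting at (1, 1, 0)
p0, d0 = np.array([0.0, 0, 0]), np.array([1.0, 1, 0])
p2, d2 = np.array([2.0, 0, 0]), np.array([1.0, -1, 0])
p1 = bezier_control_point(p0, d0, p2, d2)
mid = bezier(p0, p1, p2, 0.5)
```

Here `p0`/`d0` and `p2`/`d2` play the role of the position and direction readings from the 5-DOF electromagnetic sensors at the two ends of a section.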

Results and Remarks

Fig3twouniaxial

Fig.3 Two Uniaxial electromagnetic sensing coils are mounted in both ends of the device.

We have applied the method to a 10-joint wire-driven flexible robot. As shown in Fig.3, two uniaxial electromagnetic sensors (Aurora Shielded and Isolated 5-DOF Sensor, 0.9 mm × 12 mm) were mounted on both ends of the robot. Fig.4 shows the average error of each S-shaped curve reconstruction in the experiments. The overall average error is 1.4 mm.
We have also applied the method to a two-section concentric tube robot. As shown in Fig.5, a uniaxial sensor was mounted at the tip of the robot. The tracking results can be seen in the video.
Fig4wiredriven

Fig.4 Experimental results for the wire-driven flexible robot.

Fig5tiptracking

Fig.5 Tip tracking and shape sensing for concentric tube robot. The result can be seen in the video below.

The primary contributions of our work are summarized as follows:
1) A shape sensing method based on Bézier curve fitting and electromagnetic tracking is proposed. This method needs only the positional and directional information at a few specific positions along the curved robot.
2) Only a limited number of sensors are needed, so very few modifications are required on the robot.
3) Compared with other methods, the proposed method is easy to set up and achieves good accuracy.

People Involved

Staff: Shuang Song, Zheng Li
Investigators: Hongliang Ren, Haoyong Yu

Video

Publications

[1] Shuang Song, Wan Qiao, Baopu Li, Chao Hu, Hongliang Ren and Max Meng. “An Efficient Magnetic Tracking Method Using Uniaxial Sensing Coil”. Magnetics, IEEE Transactions on, 2014. 50(1), Article#: 4003707
[2] Shuang Song, Hongliang Ren and Haoyong Yu. “An Improved Magnetic Tracking Method using Rotating Uniaxial Coil with Sparse Points and Closed Form Analytic Solution”. IEEE Sensors Journal, 14(10): 3585-3592, 2014
[3] Shuang Song, Baopu Li, Wan Qiao, Chao Hu, Hongliang Ren, Haoyong Yu, Qi Zhang, Max Q.-H. Meng and Guoqing Xu. “6-D Magnetic Localization and Orientation Method for an Annular Magnet Based on a Closed-Form Analytical Model”. IEEE Transactions on Magnetics. 2014, 50(9), Article#: 5000411

ETH Image Based Visual Servoing to Guide Flexible Robots

Video Demo

Eye-To-Hand Image Based Visual Servoing to Guide Flexible Robots

Project goals

Flexible robots, including active cannulas and cable-driven continuum robots, are well suited to minimally invasive surgeries because they can assume various flexible shapes with great dexterity, which strengthens their ability to avoid collisions and enlarges the reachability of operation tools. Model-based control methods lead to artificial singularities and even inverted mappings in many situations, because the models are usually developed for free space and do not perform effectively in constrained environments. Therefore, the goal of this project is to control the motion of a tentacle-like curvilinear concentric tube robot by model-less visual servoing.

Approaches

A two-dimensional planar manipulator is constructed by enabling only the three translational inputs of a six-DOF concentric tube robot. As shown in Fig. 1, the concentric tube manipulator is controlled using a PID controller, and images captured by an uncalibrated camera are used as visual feedback.
Fig1setup

Fig. 1. The experimental setup includes a concentric tube robot, a camera, a laptop, a marker and a target.

The visual tracking of the concentric tube robot is based on shape detection. A circular marker is attached to the tip of the concentric tube robot, and a square target is given for the tip to trace. During the experiments, the coordinates of the marker centroid and the target centroid are computed, and the next target position is calculated at the same time, as shown in Fig. 2.
Fig2workingmechanism

Fig. 2. Working mechanism of the system. Top: translations of the three tubes. Bottom: marker, final target and the next target position on the image plane.

Fig3overview

Fig. 3. Overview of the control algorithm. The Jacobian matrix is estimated based on the measurements of each incremental movement detected from the camera.

The framework for controlling the robot is shown in Fig. 3. The initial Jacobian matrix is acquired by running each motor separately and measuring the change in the tip position of the robot in image space. The optimal control is then achieved by solving a typical redundant inverse kinematics problem. Finally, the Jacobian matrix is continuously re-estimated from the measured displacements.
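One common way to realize such a continuous Jacobian estimate is a rank-one (Broyden-style) secant update. The sketch below illustrates the general technique under that assumption, with hypothetical numbers; it is not the project's exact update rule:

```python
import numpy as np

def broyden_update(J, dq, dx, alpha=1.0):
    """Rank-one (Broyden) update of the estimated image Jacobian.

    J  : (2, n) current estimate mapping actuator increments dq to image
         displacements dx of the tip marker
    dq : (n,) executed actuator increment
    dx : (2,) observed tip displacement on the image plane
    """
    dq = dq.reshape(-1, 1)
    dx = dx.reshape(-1, 1)
    # After the update, J satisfies the secant condition J @ dq = dx
    return J + alpha * (dx - J @ dq) @ dq.T / (dq.T @ dq)

def servo_step(J, err, gain=0.5):
    """Pseudoinverse (redundancy-resolving) step toward the image-space target."""
    return gain * np.linalg.pinv(J) @ err

# Hypothetical true Jacobian of a 3-input planar robot, unknown to the controller
J_true = np.array([[1.0, 0.5, 0.2], [0.1, 0.8, 0.4]])
J_est = np.eye(2, 3)                  # rough initial estimate
dq = np.array([0.1, -0.05, 0.02])     # executed increment
dx = J_true @ dq                      # displacement observed in the image
J_new = broyden_update(J_est, dq, dx)
step = servo_step(J_new, np.array([0.5, -0.2]))
```

Iterating observe-update-step in this way is what lets the controller work without an analytic robot model.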

Results

To evaluate the proposed model-less algorithm, a simulation was first carried out in MATLAB. The desired and actual trajectories are shown in Fig. 4, from which it can be seen that the robot succeeded in following the reference trajectory and reaching the target position.
Fig4simulationcrt

Fig. 4. Simulation of using the proposed algorithm to control a concentric tube robot.

The proposed algorithm was also implemented on a physical concentric tube robot in free space. The robot was able to reach the goal with zero steady-state error in all trials, as shown in Fig. 5.
Fig5experiments

Fig. 5. The concentric tube robot is able to reach a desired goal using the proposed method. Top: the motion of the robot. Bottom: the reference and actual trajectories of two experiments.

People involved

Staff: Keyu WU, Liao WU
PI: Hongliang REN

Publications

1. Keyu Wu, Liao Wu and Hongliang Ren, “An Image Based Targeting Method to Guide a Tentacle-like Curvilinear Concentric Tube Robot”, ROBIO 2014, IEEE International Conference on Robotics and Biomimetics, 2014.

Surgical Tracking Based on Stereo Vision and Depth Sensing

Project Goals:

The objective of this research is to incorporate multiple sensors across a broad spectrum, including stereo infrared (IR) cameras, color (RGB) cameras, and depth sensors, to perceive the surgical environment. Features extracted from each modality can contribute to the cognition of complex surgical environments and procedures, and their combination can provide robustness and accuracy beyond what is obtained from a single sensing modality. As a preliminary study, we propose a multi-sensor fusion approach for localizing surgical instruments, and we developed an integrated dual-Kinect tracking system to validate the proposed hierarchical tracking approach.

Approaches:

This project addresses the problem of improving surgical instrument tracking accuracy with multi-sensor fusion techniques from computer vision. We proposed a hierarchical fusion algorithm that integrates the tracking results from a depth sensor, an IR camera pair, and an RGB camera pair. Fig. 1 summarizes the algorithm, which can be divided into "low-level" and "high-level" fusion.

Fig. 1 Block diagram of hierarchical fusion algorithm.


Low-level fusion improves the speed and robustness of marker feature extraction before the tool tip position is triangulated from the IR and RGB camera pairs. The IR and RGB cameras are modeled as pinhole cameras. The depth data of the tool serve as a prior for marker detection: the working area of the tracked tool is assumed to be limited to a reasonable volume v(x, y, z), which refines the search area for feature extraction and thus reduces the computational cost for real-time applications.
High-level fusion produces a highly accurate tracking result by fusing the two measurements. We employ the covariance intersection (CI) algorithm to estimate a new tracking result with smaller covariance.
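The CI fusion step can be sketched with hand-picked numbers as follows; in practice the weight w would be optimized (e.g., to minimize the trace or determinant of the fused covariance), and the example estimates below are hypothetical:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w):
    """Fuse two estimates whose cross-correlation is unknown (CI rule).

    P^-1 = w*P1^-1 + (1-w)*P2^-1
    x    = P (w*P1^-1 x1 + (1-w)*P2^-1 x2),  with w in [0, 1]
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * I1 + (1 - w) * I2)
    x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
    return x, P

# Toy fusion of IR-pair and RGB-pair tip estimates (hypothetical numbers)
x_ir, P_ir = np.array([1.0, 2.0]), np.diag([0.04, 0.09])
x_rgb, P_rgb = np.array([1.1, 1.9]), np.diag([0.09, 0.04])
x_f, P_f = covariance_intersection(x_ir, P_ir, x_rgb, P_rgb, w=0.5)
```

Unlike a Kalman-style fusion, CI stays consistent even when the IR and RGB trackers share correlated errors, which is why it suits this setup.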

Experiments

To demonstrate the proposed algorithm, we designed a hybrid marker-based tracking tool (Fig. 2) that combines a cross-based feature in the visible modality and a retro-reflective marker based feature in the infrared modality to obtain a fused estimate of the customized tool tip. To evaluate the performance of the proposed method, we built an experimental setup with two Kinects. Fig. 3 shows the prototype of the multi-sensor fusion tracker. The results indicate that the CI-based fusion approach clearly outperforms the separate IR and RGB trackers: both the mean error and the deviation are improved.
Hybrid marker

Fig. 3 Dual Kinect tracking system

People Involved

Staff: Wei LIU, Shuang SONG, Andy Lim
Advisor: Dr. Hongliang Ren
Collaborator: Wei ZHANG

Publications

[1] Ren, H.; Liu, W. & Lim, A. Marker-Based Instrument Tracking Using Dual Kinect Sensors for Navigated Surgery, IEEE Transactions on Automation Science and Engineering, 2013
[2] Liu, W.; Ren, H.; Zhang, W. & Song, S. Cognitive Tracking of Surgical Instruments Based on Stereo Vision and Depth Sensing, ROBIO 2013, IEEE International Conference on Robotics and Biomimetics, 2013

Related FYP Project

Andy Lim: Marker-Based Surgical Tracking With Multiple Modalities Using Microsoft Kinect

References

[1] H. Ren, D. Rank, M. Merdes, J. Stallkamp, and P. Kazanzides, “Multi-sensor data fusion in an integrated tracking system for endoscopic surgery,” IEEE Transactions on Information Technology in Biomedicine, vol. 16, no. 1, pp. 106 – 111, 2012.
[2] W. Liu, C. Hu, Q. He, and M.-H. Meng, “A three-dimensional visual localization system based on four inexpensive video cameras,” in Information and Automation (ICIA), 2010 IEEE International Conference on. IEEE, 2010, pp. 1065–1070.
[3] F. Faion, S. Friedberger, A. Zea, and U. D. Hanebeck, “Intelligent sensor-scheduling for multi-kinect-tracking,” in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 2012, pp. 3993–3999.