Simultaneous Robot-World, Sensor-Tip, and Kinematics Calibration of an Underactuated Robotic Hand with Soft Fingers



Ning Tan, Xiaoyi Gu, and Hongliang Ren, Senior Member, IEEE

Abstract—Soft robotics is a rapidly growing research field whose primary focuses are the design and development of soft robots and their applications. Because such robots are highly deformable, they are difficult to model and control as precisely as conventional rigid-link robots. Hence, the calibration and parameter identification problems of an underactuated robotic hand with soft fingers are important, but have not been investigated intensively. In this paper, we present a comparative study on the calibration of a soft robotic hand. The calibration problem is framed as an AX=YB problem with a partially known matrix A. The identifiability of the parameters is analyzed, and calibration methods based on nonlinear optimization (i.e., the L-M method and the interior-point method) and evolutionary computation (i.e., differential evolution) are presented. Extensive simulation tests are performed to compare parameter identification using the three methods. Experiments are conducted on the real soft robotic-hand setup, and the fitting, interpolating, and extrapolating errors are presented as well.

Index Terms—Soft robotics, calibration and identification, robotic hand, AX=YB, hand-eye calibration, tendon-driven robot

Body-Attached Soft Robot for Ultrasound Imaging

Project Goals

Ultrasound imaging is regarded as one of the most convenient and least invasive medical diagnostic imaging modalities and is widely used by health care providers, who expect semiautomatic or fully automatic imaging systems to reduce current clinical workloads. This paper presents a portable, wearable soft robotic system designed to replace manual operation in cooperatively steering the ultrasound probe. The human-compliant system is equipped with four separate parallel soft pneumatic actuators and can achieve movements in three directions. Vacuum suction is used to attach the robot to the intended body location. The design and fabrication of this soft robotic system are illustrated. To our knowledge, this is the first body-attached soft robot for compliant ultrasound imaging. The feasibility of the system is demonstrated through proof-of-concept experiments.


Developing a wearable soft robotic system (Figure 1) that can mimic the probe-steering procedure and optimize the contact force and angle for specific conditions would greatly reduce the reliance of ultrasound imaging on operator experience and help obtain high-quality images.

People Involved

PhD Student: Xiaoyi Gu
FYP Student: Koon Lin Tan
Project Investigator: Hongliang Ren

Related Publications

Ren, H.; Gu, X.; Tan, K. L. Human-Compliant Body-Attached Soft Robots Towards Automatic Cooperative Ultrasound Imaging. 2016 20th IEEE International Conference on Computer Supported Cooperative Work in Design (CSCWD 2016), IEEE, 2016.

A compact continuum tubular robotic system for transnasal procedures



Project Goals

Nasopharynx cancer, or nasopharyngeal carcinoma (NPC), is a tumor that originates in the nasopharynx, the uppermost region of the pharynx where the nasal passage and the throat join. It is common among ethnic Chinese people living in or emigrating from southern China, and it is the eighth most frequently occurring cancer among Singaporean men. Traditional posterior nasopharyngeal biopsy using a flexible nasal endoscope carries risks of abrasion and injury to the nasal mucosa, causing trauma to the patient. Therefore, the goal of this project is to develop a compact continuum tubular robotic system to achieve collision-free nasopharyngeal biopsy.


Fig.1  Illustration of the proposed CTR for nasopharyngeal biopsy.


We developed a compact CTR measuring 35 cm in total length and 10 cm in diameter, weighing 2.15 kg, and easy to integrate with a robotic arm to perform more complicated operations.


Fig.2 The proposed continuum tubular robot


Fig.3 Compact and lightweight CTR integrated with a positioning arm for better surgical operation

We also developed a 3D-printed biopsy needle to equip our robot for the transnasal biopsy procedure.

Fig.4  3D printed biopsy needle for transnasal biopsy

The workspace of the robot was analyzed to determine optimized tube parameters.


Fig.5 Workspace comparison for the 3-DOF CTR with three initial configurations.
Top: the outstretched part of the inner tube is fully exposed; Middle: the outstretched part of the inner tube is partially covered by the outer tube; Bottom: the outstretched part of the inner tube is totally covered by the outer tube.

Furthermore, by using an electromagnetic tracking system, we are able to build a navigation system with shape reconstruction for the tubes.


Fig.6 Shape reconstruction using third-order Bézier curve fitting


Fig.7 Sensing by EM tracker


Fig.8 Navigation interface


Three groups of experiments were carried out. In the first group, the robot was tele-operated to follow a linear path and a circular path; the path-following accuracy was about 2 mm.


Fig.9 Tele-operating the robot to follow a linear path and a circular path


Fig.10 Accuracy of the robot following the predefined paths

The second group validated the shape reconstruction algorithm; the accuracy of the results is about 1 mm.


Fig.11 Reconstruction setup


Fig.12 Reconstruction error

In the last group of experiments, the robot was tested in a biopsy procedure on a cadaver. The feasibility of the proposed robotic system was validated.


Fig.13  Cadaver experiment setup


Fig.14 Cadaver experiment process

People Involved

Research Fellow: Liao Wu
PhD Student: Keyu Wu
FYP Student: Li Ting Lynette Teo
Intern Students: Jan Feiling and Xin Liu
Project Investigator: Hongliang Ren


[1] Liao Wu, Shuang Song, Keyu Wu, Chwee Ming Lim, Hongliang Ren. Development of a compact continuum tubular robotic system for nasopharyngeal biopsy. Medical & Biological Engineering & Computing. 2016.
[2] Keyu Wu, Liao Wu, Hongliang Ren. Motion planning of continuum tubular robots based on features extracted from statistical atlas. In: Proceedings of 2015 IEEE International Conference on Intelligent Robots and Systems (IROS 2015).
[3] Keyu Wu, Liao Wu, Chwee Ming Lim, Hongliang Ren. Model-free image guidance for intelligent tubular robots with pre-clinical feasibility study: towards minimally invasive trans-orifice surgery. In: Proceedings of 2015 IEEE International Conference on Information and Automation (ICIA 2015). (best paper finalist)
[4] Benedict Tan, Liao Wu, Hongliang Ren. Prototype development of a handheld tubular curvilinear robot for minimally invasive surgery. In: The 11th Asian Conference on Computer Aided Surgery (ACCAS 2015).
[5] Keyu Wu†, Liao Wu†, Hongliang Ren. An image based targeting method to guide a curvilinear concentric tube robot. In: Proceedings of 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO 2014). Bali, Indonesia, 2014: 386-391 († equally contributed author).

Simultaneous Hand-Eye, Tool-Flange and Robot-Robot Calibration for Co-manipulators by Solving AXB=YCZ Problem


Multi-robot co-manipulation shows great potential for addressing the limitations of using a single robot in complicated tasks such as robotic surgeries. However, the dynamic setup introduces great uncertainty when the robots are mobile and the environment is unstructured. Therefore, the relationships among all the base frames (robot-robot calibration) and the relationships between the end-effectors and other devices such as cameras (hand-eye calibration) and tools (tool-flange calibration) have to be determined constantly in order to enable robotic cooperation in the constantly changing environment. We formulated the hand-eye, tool-flange and robot-robot calibration problem as a matrix equation AXB=YCZ. A series of generic geometric properties and lemmas were presented, leading to the derivation of the final simultaneous algorithm. In addition to the accurate iterative solution, a closed-form solution based on quaternions was also introduced to give an initial value. To show the feasibility and superiority of the simultaneous method, two non-simultaneous methods were also proposed for comparison. Furthermore, thorough simulations under different noise levels and various robot movements were carried out for both the simultaneous and non-simultaneous methods. Experiments on real robots were also performed to evaluate the proposed simultaneous method. The comparison results from both simulations and experiments demonstrated the superior accuracy and efficiency of the simultaneous method.

Problem Formulation

Measurement data: homogeneous transformations from the robot bases to the end-effectors (A and C), and from the tracker to the marker (B).
Unknowns: the homogeneous transformation from one robot base frame to another (Y), and from the eye/tool to the robot hand/flange (X and Z).
The measurable data A, B and C, and the unknowns X, Y and Z form a transformation loop which can be formulated as AXB = YCZ (1).
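The transformation loop (1) can be checked numerically. The following is an illustrative sketch (not the paper's code), assuming NumPy; the unknowns and robot poses are randomly generated, and the noise-free tracker reading B is synthesized by closing the loop:

```python
import numpy as np

def rand_transform(rng):
    """Random 4x4 homogeneous transform (proper rotation via QR)."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.linalg.det(q))     # force det(R) = +1
    T = np.eye(4)
    T[:3, :3] = q
    T[:3, 3] = rng.standard_normal(3)
    return T

def loop_residual(A, B, C, X, Y, Z):
    """Frobenius-norm residual of the calibration loop A X B = Y C Z."""
    return np.linalg.norm(A @ X @ B - Y @ C @ Z)

rng = np.random.default_rng(0)
X, Y, Z = (rand_transform(rng) for _ in range(3))   # ground-truth unknowns
A, C = rand_transform(rng), rand_transform(rng)     # commanded robot poses
# A noise-free tracker reading B is fixed by closing the loop:
B = np.linalg.inv(X) @ np.linalg.inv(A) @ Y @ C @ Z
assert loop_residual(A, B, C, X, Y, Z) < 1e-9
```

In a real calibration the loop residual over all recorded pose triples (A, B, C) is what the solvers drive toward zero.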

Fig. 1: The relevance and differences among the problem defined in this paper and the other two classical problems in robotics. Our problem formulation can be considered as a superset of the other two.


Non-simultaneous Methods

3-Step Method
In the non-simultaneous 3-Step method, the X and Z in (1) are separately calculated in the first and second steps as two hand-eye/tool-flange calibrations, each of which can be represented as an AX = XB problem. This results in two data acquisition procedures, in which each manipulator in turn carries out at least two rotations whose axes are neither parallel nor anti-parallel while the other is kept immobile. The last unknown, the robot-robot relationship Y, can then be solved directly from the previously retrieved data by the method of least squares.
2-Step Method
The non-simultaneous 2-Step method solves the original calibration problem in two successive steps: first AX = XB, and then AX = YB. The data acquisition procedure and the obtained data are the same as in the 3-Step method. In contrast to solving the robot-robot relationship independently, the 2-Step method solves the tool-flange/hand-eye and robot-robot transforms in an AX = YB manner in the second step. This is possible because the equation AXB = YCZ can be rewritten as (AXB)inv(Z) = YC, which is in AX = YB form once the solution of X is known.
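The rewriting step can be verified numerically. This is an illustrative sketch (not from the paper), assuming NumPy; the names A_prime and B_prime are hypothetical labels for the folded quantities:

```python
import numpy as np

def rand_transform(rng):
    """Random 4x4 homogeneous transform (proper rotation via QR)."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.linalg.det(q))
    T = np.eye(4)
    T[:3, :3] = q
    T[:3, 3] = rng.standard_normal(3)
    return T

rng = np.random.default_rng(1)
X, Y, Z = (rand_transform(rng) for _ in range(3))
A, C = rand_transform(rng), rand_transform(rng)
B = np.linalg.inv(X) @ np.linalg.inv(A) @ Y @ C @ Z   # close the loop

# Step 2: with X known from step 1, fold it into the left-hand side.
A_prime = A @ X @ B    # plays the role of "A" in the AX = YB problem
B_prime = C            # plays the role of "B"; unknowns are Z^-1 and Y
assert np.allclose(A_prime @ np.linalg.inv(Z), Y @ B_prime)
```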

Simultaneous Method

Non-simultaneous methods face the problem of error accumulation, since their later steps use the earlier solutions as input; inaccuracy produced in the earlier steps therefore propagates to the subsequent ones. Beyond accuracy, it is also preferable for the two robots to participate in the calibration procedure simultaneously, which significantly reduces the total time required.
To this end, a simultaneous method is proposed to improve the accuracy and efficiency of the calibration by solving the original AXB = YCZ problem directly. During the data acquisition procedure, the manipulators simultaneously move to different configurations and the corresponding data sets A, B and C are recorded. Then the unknowns X, Y and Z are solved simultaneously.



To illustrate the feasibility of the proposed methods, intensive simulations have been carried out under different noise conditions and with different numbers of data sets.

Fig. 2: A schematic diagram which shows the experiment setup consisting of two Puma 560 manipulators, a tracking sensor and a target marker to solve the hand-eye, tool-flange and robot-robot calibration problem.

Simulations Results

For the rotational part, the three methods perform comparably in the accuracy of Z. However, the simultaneous method slightly outperforms the two non-simultaneous methods in the accuracy of X and significantly outperforms them in the accuracy of Y. The results for the translational part are similar to the rotational ones. For the solution of Z, the accuracy of the simultaneous method matches the 3-Step method but is slightly worse than the 2-Step method; however, the simultaneous method achieves significantly better accuracy in X and Y than the other two methods.

Experiments Results

Besides the simulations, extensive real experiments were conceived and carried out under different configurations to evaluate the proposed methods. As shown in Fig. 6, the experiments involved a Staubli TX60 robot (6 DOFs, average repeatability 0.02 mm), a Barrett WAM robot (4 DOFs, average repeatability 0.05 mm) and an NDI Polaris optical tracker (RMS repeatability 0.10 mm). The optical tracker was mounted on the last link of the Staubli robot, referred to as the sensor robot. The corresponding reflective marker was mounted on the last link of the WAM robot, referred to as the marker robot.

Fig. 6: The experiment is carried out using a Staubli TX60 robot and a Barrett WAM robot. An NDI Polaris optical tracker is mounted on the Staubli robot to track a reflective marker (not visible from the current camera angle) mounted on the WAM robot.

To demonstrate the superiority of the simultaneous method in real experimental scenarios, a 5-fold cross-validation approach is repeated 200 times for all the calibration methods under all system configurations. For the simultaneous method, after data alignment and RANSAC processing, 80% of the remaining data are randomly selected to calculate the unknowns X, Y, and Z, and 20% are used as test data to evaluate the performance. For the 2-Step and 3-Step methods, after calculating the unknowns with each method, the same test data from the simultaneous method are used to evaluate their performances.

In Fig. 7, the evaluated errors of the 200 runs of 5-fold cross-validation for the three proposed methods at three ranges are shown as box plots. Left-tail paired-samples t-tests were carried out to compare the performance of the simultaneous method against the 2-Step and 3-Step methods, respectively. The results indicate that the rotational and translational errors of the simultaneous method are very significantly smaller than those of the 2-Step and 3-Step methods. The only two non-significant results are in the rotational performance at the medium and far ranges when comparing the simultaneous method with the 3-Step one. Nevertheless, the simultaneous method outperforms the non-simultaneous ones in translational error at all ranges.


Fig. 7: Results of 200 runs of 5-fold cross-validation and left-tail paired-samples t-tests at the near, medium and far ranges. The box plots show the rotational and translational error distributions for the three methods at the three ranges. **, * and N.S. stand for very significant at the 99% confidence level, significant at the 95% confidence level, and non-significant, respectively.

Related Publications

1. Liao Wu, Jiaole Wang, Max Q.-H. Meng, and Hongliang Ren, Simultaneous Hand-Eye, Tool-Flange and Robot-Robot Calibration for Multi-robot Co-manipulation by Solving AXB = YCZ Problem, IEEE Transactions on Robotics (conditionally accepted).
2. Jiaole Wang, Liao Wu and Hongliang Ren, Towards simultaneous coordinate calibrations for cooperative multiple robots, Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on. IEEE, 2014: 410-415.

Institute & People Involved

The Chinese University of Hong Kong (CUHK): Jiaole Wang, Student Member, IEEE; Max Q.-H. Meng, Fellow, IEEE
National University of Singapore (NUS): Liao Wu; Hongliang Ren, Member, IEEE



Tip Tracking and Shape Sensing for Flexible Surgical Robots

Project Goals

As a typical minimally invasive surgery, transoral surgery brings patients significant benefits such as decreased intra-operative blood loss, lower post-operative complication morbidity, and shorter hospitalization and recovery periods. Flexible surgical robots (such as tendon/wire/cable-driven robots and concentric tube robots) are efficient devices for transoral inspection and diagnosis, and can work well in complicated and confined environments. One drawback, however, is that the real-time tip position and shape cannot be well estimated, especially when there is a payload on the end-effector. To address these challenges, we focus on a novel tip tracking and shape sensing method for flexible surgical robots.


The proposed method is based on the positional and directional information of a limited number of specific joints of the robot, estimated with an electromagnetic tracking system. Electromagnetic sensors are mounted at the tip of the robot to provide tip position and direction information. Depending on the number of sections of the robot, additional sensors are mounted at specific positions on the robot to realize shape sensing. The shape sensing method is based on multiple quadratic Bézier curves.

Fig.1 Electromagnetic tracking method.

The electromagnetic tracking method is shown in Fig.1. A uniaxial sensing coil is used as the target, sensing the magnetic fields generated by the six transmitting coils, which are excited sequentially. The position and orientation of the sensing coil can then be estimated from the sensed signals.

Fig.2 Shape sensing method.

Fig.2 shows the shape sensing method for a multi-section flexible robot using multiple quadratic Bézier curves. For an N-section robot, ⌈N/2⌉ electromagnetic sensors are mounted at the tail of the (N-2k)th section, where 0 ≤ k < N/2. By utilizing the positional and directional information of the sensors, each section can then be reconstructed as a quadratic Bézier curve. Compared with image-based methods, this method is easy to set up; compared with FBG-based methods, no curvature information is needed and fewer sensors are required.
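As a sketch of this idea (not the authors' implementation), the middle control point of a quadratic Bézier section can be recovered from two sensor readings, because the curve's end tangents point along p1 − p0 and p2 − p1. The following assumes NumPy; the function names are hypothetical:

```python
import numpy as np

def quad_bezier_control_point(p0, d0, p2, d2):
    """Recover the middle control point p1 of a quadratic Bezier curve
    from the end points (p0, p2) and tangent directions (d0, d2): p1
    lies on both tangent lines, so intersect p0 + s*d0 and p2 - t*d2 in
    a least-squares sense and return the midpoint of the closest points."""
    d0 = d0 / np.linalg.norm(d0)
    d2 = d2 / np.linalg.norm(d2)
    M = np.stack([d0, d2], axis=1)        # 3x2 system: M @ [s, t] = p2 - p0
    (s, t), *_ = np.linalg.lstsq(M, p2 - p0, rcond=None)
    return 0.5 * ((p0 + s * d0) + (p2 - t * d2))

def quad_bezier(p0, p1, p2, u):
    """Evaluate the quadratic Bezier curve at parameter u in [0, 1]."""
    return (1 - u) ** 2 * p0 + 2 * u * (1 - u) * p1 + u ** 2 * p2

# Synthetic section: known control points, tangents taken from the curve ends.
p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([1.0, 2.0, 0.5])
p2 = np.array([3.0, 1.0, 1.0])
rec = quad_bezier_control_point(p0, p1 - p0, p2, p2 - p1)
assert np.allclose(rec, p1, atol=1e-8)
```

With noisy sensor data the two tangent lines are generally skew, which is why the least-squares intersection (midpoint of the closest points) is used rather than an exact one.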

Results and Remarks


Fig.3 Two uniaxial electromagnetic sensing coils mounted at both ends of the device.

We have applied the method to a 10-joint wire-driven flexible robot. As shown in Fig.3, two uniaxial electromagnetic sensors (Aurora Shielded and Isolated 5-DOF Sensor, 0.9 × 12 mm) were mounted at both ends of the robot. Fig.4 shows the average errors of each S-shaped curve reconstruction in the experiments; the overall average error is 1.4 mm.
We have also applied the method to a two-section concentric tube robot. As shown in Fig.5, a uniaxial sensor was mounted at the tip of the robot. The tracking results can be seen in the video.

Fig.4 Experimental results for the wire-driven flexible robot.


Fig.5 Tip tracking and shape sensing for concentric tube robot. The result can be seen in the video below.

The primary contributions of our work are summarized as follows:
1) A shape sensing method based on Bézier curve fitting and electromagnetic tracking is proposed. This method needs only the positional and directional information at some specific positions of the curved robot.
2) Only a limited number of sensors is needed, and thus very few modifications are required on the robot.
3) Compared with other methods, the proposed method is easy to set up and achieves good accuracy.

People Involved

Staff: Shuang Song, Zheng Li
Investigators: Hongliang Ren, Haoyong Yu



[1] Shuang Song, Wan Qiao, Baopu Li, Chao Hu, Hongliang Ren and Max Meng. “An Efficient Magnetic Tracking Method Using Uniaxial Sensing Coil”. Magnetics, IEEE Transactions on, 2014. 50(1), Article#: 4003707
[2] Shuang Song, Hongliang Ren and Haoyong Yu. “An Improved Magnetic Tracking Method using Rotating Uniaxial Coil with Sparse Points and Closed Form Analytic Solution”. IEEE Sensors Journal, 14(10): 3585-3592, 2014
[3] Shuang Song, Baopu Li, Wan Qiao, Chao Hu, Hongliang Ren, Haoyong Yu, Qi Zhang, Max Q.-H. Meng and Guoqing Xu. “6-D Magnetic Localization and Orientation Method for an Annular Magnet Based on a Closed-Form Analytical Model”. IEEE Transactions on Magnetics. 2014, 50(9), Article#: 5000411

ETH Image Based Visual Servoing to Guide Flexible Robots

Video Demo

Eye-To-Hand Image Based Visual Servoing to Guide Flexible Robots

Project goals

Flexible robots, including active cannulas and cable-driven continuum robots, are well suited to such minimally invasive surgeries because they can assume various flexible shapes with great dexterity, which strengthens collision avoidance and enlarges the reachability of operating tools. Model-based control methods lead to artificial singularities and even inverted mappings in many situations, because the models are usually developed in free space and do not perform effectively in constrained environments. Therefore, the goal of this project is to control the motion of a tentacle-like curvilinear concentric tube robot by model-less visual servoing.


A two-dimensional planar manipulator is constructed by enabling only the three translational inputs of a six-DOF concentric tube robot. As shown in Fig. 1, the concentric tube manipulator is controlled by a PID controller, and images captured by an uncalibrated camera are used as visual feedback.

Fig. 1. The experimental setup includes a concentric tube robot, a camera, a laptop, a marker and a target.

The visual tracking of the concentric tube robot is based on shape detection. A circular marker is attached to the tip of the concentric tube robot, and a square target is given for the tip to trace. During the experiments, the coordinates of the marker centroid and the target centroid are calculated, and the next target position is computed at the same time, as shown in Fig. 2.

Fig. 2. Working mechanism of the system. Top: translations of the three tubes. Bottom: marker, final target and the next target position on the image plane.


Fig. 3. Overview of the control algorithm. The Jacobian matrix is estimated based on the measurements of each incremental movement detected from the camera.

The framework for controlling the robot is shown in Fig. 3. The initial Jacobian matrix is acquired by running each motor separately and measuring the resulting change of the robot's tip position in image space. Optimal control is then achieved by solving a typical redundant inverse kinematics problem. Finally, the Jacobian matrix is continuously re-estimated from the measured displacements.
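A minimal sketch of this kind of model-less servoing loop, assuming NumPy and a toy linear "plant" in place of the real robot. A Broyden-style rank-one update is one common way to continuously re-estimate the Jacobian from measured displacements; the project's exact update rule may differ:

```python
import numpy as np

def broyden_update(J, dq, dx, alpha=1.0):
    """Rank-one (Broyden) correction of the image Jacobian estimate J so
    that it reproduces the latest observed motion: dx ~= J @ dq."""
    dq = dq.reshape(-1, 1)
    dx = dx.reshape(-1, 1)
    denom = float(dq.T @ dq)
    if denom < 1e-12:                 # skip negligible actuator motion
        return J
    return J + alpha * (dx - J @ dq) @ dq.T / denom

def servo_step(J, err, gain=0.5):
    """Resolved-rate step toward the target: dq = gain * pinv(J) @ err."""
    return gain * np.linalg.pinv(J) @ err

# Toy linear "plant" standing in for the robot-plus-camera: x = J_true @ q.
J_true = np.array([[1.0, 0.0, 0.5],
                   [0.0, 1.0, 0.2]])
J = np.array([[1.0, 0.3, 0.0],        # deliberately imperfect initial
              [0.2, 1.0, 0.0]])       # estimate from per-motor probing
q = np.zeros(3)
x = J_true @ q
target = np.array([1.0, 1.0])
for _ in range(40):
    dq = servo_step(J, target - x)
    q = q + dq
    x_new = J_true @ q
    J = broyden_update(J, dq, x_new - x)  # keep J consistent with data
    x = x_new
assert np.linalg.norm(target - x) < 1e-3
```

Using the pseudoinverse here resolves the actuation redundancy (three inputs, two image coordinates) in the minimum-norm sense.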


To evaluate the proposed model-less algorithm, a simulation was first carried out in MATLAB. The desired and actual trajectories are shown in Fig. 4, from which it can be seen that the robot succeeded in following the reference trajectory and reaching the target position.

Fig. 4. Simulation of using the proposed algorithm to control a concentric tube robot.

The proposed algorithm was also implemented on a physical concentric tube robot in free space. The robot was able to reach the goal with zero steady-state error in all trials, as shown in Fig. 5.

Fig. 5. The concentric tube robot is able to reach a desired goal using the proposed method. Top: the motion of the robot. Bottom: the reference and actual trajectories of two experiments.

People involved

Staff: Keyu WU, Liao WU
PI: Hongliang REN


1. Keyu Wu, Liao Wu and Hongliang Ren, “An Image Based Targeting Method to Guide a Tentacle-like Curvilinear Concentric Tube Robot”, ROBIO 2014, IEEE International Conference on Robotics and Biomimetics, 2014.

Surgical Tracking Based on Stereo Vision and Depth Sensing

Project Goals:

The objective of this research is to incorporate multiple sensors across a broad spectrum, including stereo infrared (IR) cameras, color (RGB) cameras and depth sensors, to perceive the surgical environment. Features extracted from each modality can contribute to the cognition of complex surgical environments or procedures, and their combination can provide higher robustness and accuracy than any single sensing modality. As a preliminary study, we propose a multi-sensor fusion approach for localizing surgical instruments, and we developed an integrated dual-Kinect tracking system to validate the proposed hierarchical tracking approach.


This project considers the problem of improving surgical instrument tracking accuracy through multi-sensor fusion techniques from computer vision. We propose a hierarchical fusion algorithm that integrates the tracking results from a depth sensor, an IR camera pair and an RGB camera pair. Fig. 1 summarizes the algorithm, which can be divided into "low-level" and "high-level" fusion.

Fig. 1 Block diagram of hierarchical fusion algorithm.

The low-level fusion improves the speed and robustness of marker feature extraction before the tool tip position is triangulated from the IR and RGB camera pairs. The IR and RGB cameras are modeled as pin-hole cameras. The depth data of the tool serve as a prior for marker detection: the working area of the tracked tool is assumed to be limited to a reasonable volume v(x, y, z), which can be used to refine the search area for feature extraction and thereby reduce the computational cost for real-time applications.
The high-level fusion aims at a highly accurate tracking result by fusing the two measurements. We employ the covariance intersection (CI) algorithm to estimate a new tracking result with smaller covariance.
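A minimal sketch of covariance intersection, assuming NumPy; the weight w is chosen by a simple grid search minimizing the trace of the fused covariance (the implementation details here are illustrative, not the project's code):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two estimates (mean, covariance) whose cross-correlation is
    unknown: P_ci^-1 = w*P1^-1 + (1-w)*P2^-1, with w picked by grid
    search to minimize the trace of the fused covariance."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Toy 2-D tip positions: the IR pair is confident in x, the RGB pair in y.
x_ir  = np.array([0.0, 0.0]); P_ir  = np.diag([1.0, 4.0])
x_rgb = np.array([1.0, 1.0]); P_rgb = np.diag([4.0, 1.0])
x_ci, P_ci = covariance_intersection(x_ir, P_ir, x_rgb, P_rgb)
# The fused covariance is never worse than the better of the two inputs.
assert np.trace(P_ci) <= min(np.trace(P_ir), np.trace(P_rgb)) + 1e-9
```

Unlike a Kalman-style fusion, CI remains consistent even when the IR and RGB measurement errors are correlated in an unknown way, which is why it suits fusing trackers that observe the same tool.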


To demonstrate the proposed algorithm, we designed a hybrid marker-based tracking tool (Fig. 2) that incorporates a cross-based feature in the visible modality and a retro-reflective marker-based feature in the infrared modality to obtain a fused tracking of the customized tool tip. To evaluate the performance of the proposed method, we employed two Kinects to build the experimental setup. Fig. 3 shows the prototype of the multi-sensor fusion tracker; the results indicate that the CI-based fusion approach clearly outperforms the separate IR or RGB trackers, improving both the mean error and the deviation.
Fig. 2 Hybrid marker

Fig. 3 Dual Kinect tracking system

People Involved

Staff: Wei LIU, Shuang SONG, Andy Lim
Advisor: Dr. Hongliang Ren
Collaborator: Wei ZHANG


[1] Ren, H.; LIU, W. & LIM, A. Marker-Based Instrument Tracking Using Dual Kinect Sensors for Navigated Surgery IEEE Transactions on Automation Science and Engineering, 2013
[2] Liu, W.; Ren, H.; Zhang, W. & Song, S. Cognitive Tracking of Surgical Instruments Based on Stereo Vision and Depth Sensing, ROBIO 2013, IEEE International Conference on Robotics and Biomimetics, 2013

Related FYP Project

Andy Lim: Marker-Based Surgical Tracking With Multiple Modalities Using Microsoft Kinect


[1] H. Ren, D. Rank, M. Merdes, J. Stallkamp, and P. Kazanzides, “Multi-sensor data fusion in an integrated tracking system for endoscopic surgery,” IEEE Transactions on Information Technology in Biomedicine, vol. 16, no. 1, pp. 106 – 111, 2012.
[2] W. Liu, C. Hu, Q. He, and M.-H. Meng, “A three-dimensional visual localization system based on four inexpensive video cameras,” in Information and Automation (ICIA), 2010 IEEE International Conference on. IEEE, 2010, pp. 1065–1070.
[3] F. Faion, S. Friedberger, A. Zea, and U. D. Hanebeck, “Intelligent sensor-scheduling for multi-kinect-tracking,” in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 2012, pp. 3993–3999.

3D Ultrasound Tracking and Servoing of Tubular Surgical Robots


[Pediatric Cardiac Bioengineering Lab of Children’s Hospital Boston, Harvard Medical School, USA]
[Philips Research]


Ultrasound imaging is a useful modality for guiding minimally invasive interventions due to its portability and safety. In cardiac surgery, for example, real-time 3D ultrasound imaging is being investigated for guiding repairs of complex defects inside the beating heart. Substantial difficulty can arise, however, when surgical instruments and tissue structures are imaged simultaneously to achieve precise manipulations. This research project includes: (1) the development of echogenic instrument coatings, (2) the design of passive instrument markers, and (3) the development of algorithms for instrument tracking and servoing. For example, a family of passive markers has been developed by which the position and orientation of a surgical instrument can be determined from a single 3D ultrasound volume using simple image processing. Marker-based estimates of instrument pose can be used in augmented reality displays or for image-based servoing.
The design principles for the marker shapes ensure that imaging-system and measurement-uniqueness constraints are met. Error analysis is used to guide marker design and to establish a lower bound on measurement uncertainty. Experimental evaluation of the marker designs and tracking algorithms demonstrates a tracking accuracy of 0.7 mm in position and 0.075 rad in orientation.
Another example investigates automatic curve pattern detection from 3D ultrasound images, since many surgical instruments, such as continuum tube robots and catheters, are curved along the distal end during operation. We propose a two-stage approach that decomposes the six-parameter constant-curvature curve estimation problem into two parameter estimation problems: 3D spatial plane detection and 2D circular pattern detection. The algorithm includes an image-preprocessing pipeline (thresholding, denoising, connected-component analysis and skeletonization) for automatically extracting the curved robot from ultrasound volumetric images. The proposed method can also be used for spatial circular or arc pattern recognition in other volumetric images such as CT and MRI.
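As an illustrative sketch of the second stage (not the paper's implementation), a 2D circular pattern can be fitted to skeleton points in the detected plane with an algebraic (Kåsa-style) least-squares fit, assuming NumPy:

```python
import numpy as np

def fit_circle_2d(pts):
    """Algebraic least-squares circle fit to an Nx2 point set:
    solve x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c), then recover
    the center (-a/2, -b/2) and radius sqrt(cx^2 + cy^2 - c)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return np.array([cx, cy]), r

# Synthetic skeleton points on a 90-degree arc (constant-curvature segment).
theta = np.linspace(0.0, np.pi / 2, 20)
center_true, r_true = np.array([2.0, -1.0]), 3.0
arc = center_true + r_true * np.column_stack([np.cos(theta), np.sin(theta)])
center_fit, r_fit = fit_circle_2d(arc)
assert np.allclose(center_fit, center_true, atol=1e-6)
assert abs(r_fit - r_true) < 1e-6
```

The first stage (plane detection) would supply the 2D coordinates here, e.g. by projecting the 3D skeleton points onto their best-fit plane before the circle fit.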
Additional related information at [Pediatric Cardiac Bioengineering Lab of Children’s Hospital Boston, Harvard Medical School]

Surgical Tracking System for Laparoscopic Surgery

ERC-CISST, LCSR Lab of Johns Hopkins University, USA
Fraunhofer Germany (FhG)

Laparoscopic surgery poses a challenging problem for a real-time intra-body navigation system: how to keep tracking the surgical instruments inside the human body intra-operatively. This project aims to develop surgical tracking technology that is accurate, robust against environmental disturbances, and does not require line-of-sight. The current approach is to combine electromagnetic and inertial sensing. Sensor fusion methods are proposed for a hybrid tracking system that incorporates a miniature inertial measurement unit and an electromagnetic navigation system, in order to obtain continuous position and orientation information, even in the presence of metal objects.
Additional information at [SMARTS Lab of Johns Hopkins University]

Biomedical Application of Wireless Heterogeneous Sensor Networks

We study a typical heterogeneous network, a Wireless Biomedical Sensor Network (WBSN), as it consists of various types of biosensors to monitor different physiological parameters. WBSN will help to enhance medical services with its unique advantages in long-term monitoring, easy network deployment, wireless connections, and ambulatory capabilities. The network protocol plays an important role in carrying out the medical and healthcare services. Many unique challenges exist in WBSN design for medical and healthcare services, including extensive optimization problems in network protocol design to deal with power scheduling and radiation absorption concerns. Concerning these issues, we present a systematic solution to the wireless biomedical sensor network in our project named MediMesh. We develop a prototypical test-bed for medical and healthcare applications and evaluate the radiation absorption effects and efficiency. A lightweight network protocol is proposed, taking into consideration of the radiation absorption effects and communication overhead. After data acquisition from the sink stations, a data publishing system based on web service technology is implemented for typical medical and healthcare monitoring services in a hospital or home environment.