Presentation and demonstration videos
EIH Safety-Enhanced Model-Free Visual Servoing (Summary Video)
Eye-To-Hand IBVS ROBIO Demo
[ROBIO] Eye-To-Hand Image Based Visual Servoing to Guide Flexible Robots (more info)
iTubot IV System
EIH Visual servoing inside a skull model
EIH Visual servoing in cadaver experiments
TORS IBVS with few environment constraints vs TORS IBVS with more environment constraints
EIH Model-Free Visual Servoing (MFVS) ICIA
Project Goals
Extra sensing information is required to track dynamic oral targets and compensate for undesired disturbances. Among methods that utilize different sensing modalities, direct vision-based methods are advantageous because they require little modification of the hardware and can provide accurate and reliable information to assist robot control. A visual servoing control mode is formed when the visual feedback is used as the input to the robot control loop. Since position-based visual servoing is generally more sensitive to noise and subject to calibration and reconstruction errors, image-based visual servoing is preferable in these surgeries.
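To make the control mode concrete, the sketch below shows one step of the classic proportional image-based visual servoing (IBVS) law, where the image-plane error is mapped to an actuator command through the pseudo-inverse of an image Jacobian. This is a generic textbook formulation, not the project's specific controller; the function name, gain, and toy 2x2 Jacobian are illustrative assumptions.

```python
import numpy as np

def ibvs_step(s, s_star, J, gain=0.5):
    """One proportional IBVS step: drive the image-plane error toward zero.

    s      : current feature coordinates on the image plane
    s_star : desired feature coordinates
    J      : image Jacobian mapping actuator velocities to feature velocities
    """
    error = s - s_star                        # image-space error e = s - s*
    # Classic proportional law: q_dot = -lambda * J^+ * e
    return -gain * np.linalg.pinv(J) @ error

# Toy usage: the feature sits 10 px away from the goal along the x axis
J = np.array([[1.0, 0.0], [0.0, 1.0]])        # assumed identity Jacobian
cmd = ibvs_step(np.array([110.0, 50.0]), np.array([100.0, 50.0]), J)
```

Because the command is proportional to the remaining error, the feature converges toward the goal over repeated iterations.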
So far, adaptive image-based visual servoing algorithms have been developed based on the known kinematic model of the cable-driven continuum manipulator. However, the model of a continuum robot is no longer accurate when the robot contacts obstacles in a confined space. Such model-based methods also require calibration between the camera and the robot, and they are usually more complicated due to the complex model information involved. Therefore, model-less visual servoing is preferable in many cases.
The objective of this project is to develop a model-less eye-in-hand image-based visual servoing technique that is stable, efficient and safe enough to be applied in transoral and transnasal procedures.
Approach
The overall framework of the proposed visual servoing technique is depicted in Fig. 1. Since no prior information on the kinematic model of the robot is given and no calibration is carried out between the robot and the camera, the image Jacobian is initialized by running each individual motor separately and measuring the resulting position change of the target object on the image plane. The desired displacement of the surgical site is then calculated from its current and desired coordinates in the image space. The robot is subsequently controlled using the estimated image Jacobian to achieve the required displacement of the target object on the image plane. However, due to the unknown environment and the inaccuracy of the Jacobian estimation, the actual displacement of the target object usually differs from the expected one. Based on the difference between the expected and actual displacements of the target on the image plane, the image Jacobian can be updated. The robot control and Jacobian update processes are executed iteratively until the task is accomplished.

Fig. 1. The workflow of the model-less image-based visual servoing technique.
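The two building blocks of the workflow above — initializing the Jacobian by perturbing each motor separately, and correcting it from the mismatch between expected and actual displacements — can be sketched as follows. This is a minimal illustration assuming a generic rank-one (Broyden-style) correction; `move_motor` and `read_feature` are hypothetical robot/camera interfaces, not the project's actual API.

```python
import numpy as np

def init_jacobian(move_motor, read_feature, n_motors, delta=1.0):
    """Initialize the image Jacobian by actuating each motor separately
    and measuring the displacement of the target on the image plane."""
    s0 = read_feature()
    J = np.zeros((len(s0), n_motors))
    for i in range(n_motors):
        dq = np.zeros(n_motors)
        dq[i] = delta
        move_motor(dq)                        # perturb motor i only
        J[:, i] = (read_feature() - s0) / delta
        move_motor(-dq)                       # return to the start pose
    return J

def jacobian_update(J, dq, ds_actual):
    """Rank-one (Broyden-style) correction based on the difference between
    the predicted displacement J @ dq and the measured one ds_actual."""
    denom = dq @ dq
    if denom < 1e-12:
        return J                              # no motion, nothing to learn
    return J + np.outer(ds_actual - J @ dq, dq) / denom
```

In the servoing loop, each iteration commands a motion from the current Jacobian estimate, measures the actual feature displacement, and calls `jacobian_update` before the next step.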
Besides the aforementioned primary task, other tasks can be achieved at the same time without affecting the primary task. Considering the confined space in surgery, the range of the robot's movement should be minimized to avoid unnecessary harm to critical blood vessels and healthy tissues of the patient. To this end, the secondary task is set to minimize the sweeping of the robot. The positions of several critical points on the robot can be detected by the EM tracking system, as shown in Fig. 2, and used to reconstruct the shape of the robot. Based on the reconstructed shape, the sweeping of the robot can be evaluated and minimized during the visual servoing process.

Fig. 2. A flexible robotic manipulator with a micro-camera and EM trackers mounted at its distal end.
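A standard way to let a secondary task (such as minimizing sweeping) run without affecting the primary image task is null-space projection: the gradient of the secondary cost is projected into the null space of the image Jacobian, so it produces no motion on the image plane. The sketch below illustrates this generic construction; the function name, gains, and cost gradient are illustrative assumptions, not the project's implementation.

```python
import numpy as np

def servo_with_secondary(J, e_img, grad_w, gain=0.5, k2=0.1):
    """Primary task: reduce the image error e_img via the pseudo-inverse.
    Secondary task: descend a sweeping cost w(q) with gradient grad_w,
    projected into the null space of J so it cannot disturb the image."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J       # null-space projector of J
    return -gain * J_pinv @ e_img - k2 * N @ grad_w
```

With more actuators than image-feature coordinates (a redundant robot), the null space is non-trivial, so the sweeping cost can be decreased while the target still converges on the image plane.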
Results
Preliminary experiments have been conducted to evaluate the performance of the proposed visual servoing algorithm. The first experiment is conducted inside a skull model, where the visual servoing algorithm is implemented inside the nasal cavity. The experimental setup is shown in Fig. 3. The objective of the experiment is to adjust the pose of the robot so that the target object in sight is moved to the desired position, indicated by a yellow cross on the image plane. In addition, a needle is inserted inside the robot in advance to better visualize the efficiency of the proposed visual servoing algorithm. The needle is extended beyond the tip of the robot at the beginning of the experiment, and it is extended out of the tip section again at the end of the experiment to touch the target region.

Fig. 3. Experimental setup of the visual servoing experiment conducted inside a skull model.
The image-based visual servoing procedure is illustrated in Fig. 4 with several images captured by the micro-camera at different stages of the process. The proposed visual servoing algorithm is capable of moving the target to the desired position by controlling the pose of the robot. The needle insertion process is also demonstrated in Fig. 4, and the positions of the needle before and after visual servoing are compared, indicating that the algorithm is able to bring surgical tools to the desired surgical sites automatically and precisely.

Fig. 4. The visual servoing process implemented inside a skull model. Left: the images captured by the micro-camera attached at the tip of one of the channels of the continuum robot. The yellow cross represents the desired position, while the black curve and the red dot represent the contour and center of the target, respectively. Right: the side view of the visual servoing process showing the needle insertion procedure before, during and after implementing the visual servoing algorithm.
The visual servoing algorithm is also validated in cadaveric experiments. The setup of the image-guided transoral robotic surgery by visual servoing is shown in Fig. 5. The objective of the experiment is to drive the flexible robotic manipulator, with a distal endoscope mounted, so that the target surgical site marked by a laser light source is moved to the desired location on the image plane.

Fig. 5. Experimental setup of the image-guided transoral robotic surgery based on the visual servoing technique.
During the visual servoing process, the shape of the robot is reconstructed based on the electromagnetic tracking information, and the real-time navigation interface is displayed on the surgical navigation workstation. The visual servoing procedure is recorded and shown in Fig. 6, where the yellow circle indicates the desired position of the target object, the black curve represents the contour of the target object highlighted by a laser pointer, and the red dot is the center of the target object.

Fig. 6. The image sequence obtained during the transoral visual servoing process. The first image shows the start state, while the last image shows the final state where the target object is within the desired region. The yellow circle represents the desired region, while the black curve and the red dot indicate the contour and center of the target object, respectively. The target object is highlighted using a laser pointer.
People Involved
PI: Hongliang Ren (NUS), Chwee Ming Lim (NUH)
Student: Keyu Wu
Staff: Liao Wu
Publications
- Wu, K.; Wu, L. & Ren, H. "An Image Based Targeting Method to Guide a Tentacle-like Curvilinear Concentric Tube Robot," ROBIO 2014, IEEE International Conference on Robotics and Biomimetics, 2014, pp. 386-391.
- Wu, K.; Wu, L.; Lim, C. M. & Ren, H. "Model-free Image Guidance for Intelligent Tubular Robots with Pre-clinical Feasibility Study: Towards Minimally Invasive Trans-orifice Surgery," ICIA 2015, IEEE International Conference on Information and Automation, 2015, pp. 749-754.