FYP Project Goals
The aim of this project is to track surgical instruments using Kinect sensors. With recent advances in computing and imaging technologies, visual limitations during surgery, such as those caused by poor depth perception and a limited field of view, can be overcome with computer-assisted systems. 3D models of the patient's anatomy (obtained during pre-operative planning via Computed Tomography or Magnetic Resonance Imaging) can be combined with intraoperative information such as the 3D position and orientation of the surgical instruments. Such a computer-assisted system can reduce surgical mistakes and help identify unnecessary or imperfect surgical movements, effectively increasing the success rate of surgery.
For computer-assisted systems to work, accurate spatial information about the surgical instruments is required. Most surgical tools move with six degrees of freedom (6DoF): translation along the x-, y-, and z-axes and rotation about these axes. The introduction of the Microsoft Kinect sensor raises the possibility of an alternative optical tracking system for surgical instruments.
This project's objective is to develop an optical tracking system for surgical instruments using the capabilities of the Kinect sensor. This part of the project focuses on marker-based tracking with the Kinect sensor.
- The setup for tracking surgical instruments consists of two Kinects placed side by side with overlapping fields of view.
- A calibration board is used to determine the intrinsic camera parameters as well as the relative position of the cameras. This allows us to calculate the fundamental matrix, which is essential for the epipolar geometry calculations used in 3D point reconstruction. Panels: (a) without external LED illumination; (b) with LED illumination. The same board is used for RGB camera calibration.
- Seeded region growing segments the retro-reflective markers from the duller background. The algorithm is implemented with OpenCV.
- Corner detection: OpenCV's cornerSubPix algorithm refines the detected corner positions, giving sub-pixel accuracy.
- Over a 1.2 m working range, the RMS error is 0.37 to 0.68 mm for IRR marker tracking and 0.18 to 0.37 mm for checkerboard tracking; checkerboard tracking is therefore more accurate. In both cases, the error increases with distance from the camera.
- The jitter of the checkerboard tracking system was investigated and found to range from 0.071 mm to 0.29 mm over the same 1.2 m range.
- (dots) Measured jitter plotted against the distance from the left camera. (line) The data are fitted to an order-2 polynomial to analyze how jitter varies with depth.
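The order-2 fit from the plot can be sketched with numpy.polyfit. The jitter measurements below are synthetic stand-ins (an assumed quadratic trend plus noise); only the fitting procedure mirrors the analysis, not the actual data.

```python
import numpy as np

# Synthetic jitter measurements (mm) at depths spanning a 1.2 m
# working range; the trend and noise level are assumptions.
depth = np.linspace(0.8, 2.0, 7)                 # distance from left camera (m)
jitter = 0.05 + 0.02 * depth + 0.04 * depth**2   # assumed quadratic trend (mm)
jitter += np.random.default_rng(0).normal(0.0, 0.005, depth.size)

# Fit an order-2 polynomial, as in the jitter-vs-depth plot.
coeffs = np.polyfit(depth, jitter, 2)
fit = np.poly1d(coeffs)
print(fit(1.4))                                  # predicted jitter at 1.4 m (mm)
```

A quadratic is a natural choice here because stereo depth error grows roughly with the square of the distance from the cameras, so jitter is expected to follow the same shape.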
FYP Student: Andy Lim Yong Mong
Research Engineer: Liu Wei
Advisor: Dr. Ren Hongliang