Nasopharyngeal Carcinoma Surveillance

Project Goals:

Nasopharyngeal carcinoma (NPC) is a tumor arising from the epithelial cells that line the surface of the nasopharynx. NPC is of particular concern in our studies because it is most common in East Asia and Africa, especially Southeast Asia. Because NPC has a high tendency toward metastatic dissemination, about 30-60% of patients with locally advanced disease will develop distant metastases and die of disseminated disease. Beyond early diagnosis, it is therefore of paramount importance to monitor for local recurrence of NPC and for the development of distant metastasis. Hence, there is a need for a patient-operated, in-vivo surveillance system.

Approaches:

The device has to be a remote surveillance system with which the patient can independently monitor tumor growth in the nasopharyngeal region. The design requirements are as follows:
1. The device must be made of medically approved materials that are mechanically strong enough to withstand 1 to 2 years of constant use, since the average period of NPC surveillance spans roughly two years.
2. The device must be durable so as to last the entire time period of use.
3. The device must house a camera module that is rotatable by at least 90 degrees, so that it can accurately pan across the entire nasopharynx region.
Additional aims realized in the device design are as follows:
1. The device must be made in an economically feasible manner, such that it is able to reduce medical costs as much as possible.
2. The device's output must be comparable to that of other NPC surveillance methods, so that the quality of the monitoring is not compromised.
3. The device must be patient-administered, meaning that the patient can deploy and use it without the assistance of medical personnel. This decreases patient dependence on the healthcare system and reduces the burden on clinicians.
4. The device must be as safe and hassle-free for the patient as possible.

Results and Remarks

The first prototype device comprises the following components:
1. Camera Module
2. Arm Head and Hook
3. Arm
4. Wheel and Cover
5. Handle
The camera module houses a mini-camera, which obtains imaging output from within the nasopharyngeal region after deployment into the nasal cavity. The housing has an axle on each side to enable the rotational mechanism. The camera module is then attached to the arm head, which clicks the camera's axles into place.
The arm head also carries a hook, whose purpose is to stabilize the device once deployed at the site of the vomer bone. The hook allows easy positioning of the camera module at the site and increases the stability of the device during image capture, reducing the likelihood of blurred images.
The rotational mechanism is driven by three components: (i) the camera module and axle, (ii) the arm, and (iii) the wheel. A thin nylon wire is first threaded through the camera module, then down the tunnels in the arm, and finally around the wheel. Turning the wheel in either direction rotates the camera module.
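Since the nylon wire transfers equal arc length between the wheel and the camera axle, the camera rotation follows a simple pulley relation (the radii below are symbolic; the actual dimensions are not specified in this summary):

$$r_{\text{wheel}}\,\theta_{\text{wheel}} = r_{\text{axle}}\,\theta_{\text{cam}} \;\Rightarrow\; \theta_{\text{cam}} = \frac{r_{\text{wheel}}}{r_{\text{axle}}}\,\theta_{\text{wheel}},$$

so the 90-degree pan requirement is met with less wheel travel whenever the wheel radius exceeds the axle radius.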
The design verification experiments show that the forces required to fracture the VeroClear tip and head are 5.1 N and 15.6 N, respectively. The maximum tensile strength of the nylon wire is 262.4 MPa. The arm can bend by a maximum of 7.3 cm under a force of 15.3 N before fracture. FEA shows that even under an exaggerated maximum loading of 100 N, the device does not fracture.
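As a rough plausibility check on these figures (the 1 N in-use load below is an illustrative assumption, not a measured requirement), the implied safety factors are

$$\mathrm{SF}_{\text{tip}} = \frac{5.1\ \text{N}}{1\ \text{N}} \approx 5, \qquad \mathrm{SF}_{\text{head}} = \frac{15.6\ \text{N}}{1\ \text{N}} \approx 16.$$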
The device is able to capture images from within the nasopharyngeal region using the mini-camera. This was demonstrated by inserting the device through the nasal passageway of a phantom skull, with a piece of Blu-Tack representing the tumor.
The forces obtained for fracture and/or bending of the VeroClear arm are well beyond the acceptable range of forces set by the acceptance criteria. This shows that the VeroClear arm passes the first stage of verification testing for its safety and suitability in this design. However, to further improve the safety of the device, the final device is intended to be made of a mechanically stronger, medically approved material: 316L stainless steel. Extrapolated calculations and FEA of 316L stainless steel indicate that it is an equally good, if not better, choice of material for manufacturing this device, owing to its excellent mechanical strength and durability.
Video to be uploaded

People Involved

Undergraduate Students: Neerajha Ram, Khor Jing An, Paul Ng, Ong Jun Shu, Anselina Goh
Advisor: Dr. Hongliang Ren

Awards

Awarded the MOST ELEGANT DESIGN INSTRUMENTATION AWARD at the BN3101 Presentations 2013.

BN5209 Neurosensors and Signal Processing AY13/14

BN5209 Neurosensors and Signal Processing Semester 2, 2013/2014

SCHEDULE

Time period: 14-Jan-14 To 9-May-14
Lecture Time:

  • Tuesday: 5 pm – 7 pm (SINAPSE)
  • Thursday: 5 pm – 7 pm (SINAPSE)

Syllabus

  • Week 1: Mon 13 Jan – Fri 17 Jan
    Introduction to the Course and Introduction to Neurosciences, Neurophysiology
    (Quiz)
  • Week 2: Mon 20 Jan – Fri 24 Jan
Neural recording methods: Microelectrodes, MEMS, optical neurosensors
    (Notes) (Quiz)
  • Week 3: Mon 27 Jan – Fri 31 Jan
    Neural recording methods: Neural circuits, amplifiers, telemetry, stimulation
    (Notes)(Quiz 1, Quiz 2)
  • Week 4: Mon 3 Feb – Fri 7 Feb
    Introduction of Signal Processing (Notes)
  • Week 5: Mon 10 Feb – Fri 14 Feb
    Neural signals (basic science) – action potentials (spikes) and analysis
    (Notes)
  • Week 6: Mon 17 Feb – Fri 21 Feb
Neural signals (clinical applications) – EEG, evoked potentials
    (W6_Part1SpikeDataAnalysis.pdf;W6_Part2ElectrophysiologicalBasisOfNeuralRecordings_Kaiquan.ppt;W6_EEG_EP_NeuralSignalsClinical)
  • Recess Week Sat, 22 Feb 2014 ~ Sun, 2 Mar 2014
  • Week 7: Mon 3 Mar – Fri 7 Mar
    Brain machine interfaces (Notes)
  • Week 8: Mon 10 Mar – Fri 14 Mar
    Multiple Dimensional Signal Processing (Notes) (Quiz)
  • Week 9: Mon 17 Mar – Fri 21 Mar
    Neuroimaging and Neurosurgery (Notes)
  • Week 10: Mon 24 Mar – Fri 28 Mar
    Neurosurgical systems and Microscopic Imaging
  • Week 11: Mon 31 Mar – Fri 4 Apr
    Optical imaging: Cellular (microscopy), In Vivo (speckle, Photoacoustic, OCT)
  • Week 12: Mon 7 Apr – Fri 11 Apr
    Applications of neural interfaces (peripheral and central cortical)
  • Week 13: Mon 14 Apr – Fri 18 Apr
    Project Reports/presentations

Course Projects

Please log in to Dropbox to view the materials.
1. EEG for brain state monitoring
2. EEG/EMG Feature Identification during Elbow Flexion/Extension

AIMS & OBJECTIVES

This module teaches students advanced neuroengineering principles, ranging from an introduction to basic neuroscience to neurosensing technology and advanced signal processing techniques. Major topics include: introduction to neurosciences; neural recording methods; neural circuits, amplifiers, telemetry and stimulation; sensors for measuring the electric and magnetic fields of the brain in relation to brain activities; digitization of brain activities; neural signal processing; brain machine interfaces; neurosurgical systems; and applications of neural interfaces. The module is designed for students at the Master's and PhD levels in Engineering, Science and Medicine.

PREREQUISITES

Basic probability
Basic circuits
Linear algebra (matrix/vector)
Matlab or other programming
Recommended Textbooks: Neural Engineering, Edited by Bin He

TEACHING MODES

The majority of the course will be in lecture-tutorial format. Some advanced topics will be in the formats of seminar and research presentations.

ASSESSMENT

In-Class Quizzes (10, for 20% of grade)
Take-Home Tests (2, for 50%) or Exam
Labs/Projects (3, for 30%)

IVLE Registration and Information


Surgical Tracking Based on Stereo Vision and Depth Sensing

Project Goals:

The objective of this research is to incorporate multiple sensors spanning a broad spectrum, including stereo infrared (IR) cameras, color (RGB) cameras, and depth sensors, to perceive the surgical environment. Features extracted from each modality can contribute to the cognition of complex surgical environments and procedures, and their combination can provide robustness and accuracy beyond what is obtained from a single sensing modality. As a preliminary study, we propose a multi-sensor fusion approach for localizing surgical instruments, and we developed an integrated dual-Kinect tracking system to validate the proposed hierarchical tracking approach.

Approaches:

This project addresses the problem of improving surgical-instrument tracking accuracy through multi-sensor fusion techniques from computer vision. We propose a hierarchical fusion algorithm that integrates the tracking results from a depth sensor, an IR camera pair, and an RGB camera pair. Fig. 1 summarizes the algorithm, which can be divided into "low-level" and "high-level" fusion.

Fig. 1 Block diagram of hierarchical fusion algorithm.


The low-level fusion improves the speed and robustness of marker feature extraction before the tool-tip position is triangulated from the IR and RGB camera pairs. The IR and RGB cameras are modeled as pin-hole cameras. The depth data of the tool serve as a prior for marker detection: the working area of the tracked tool is assumed to be confined to a reasonable volume v(x, y, z), which can be used to refine the search area for feature extraction and thereby reduce the computational cost for real-time applications.
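As an illustration of this low-level step, the sketch below projects the corners of an assumed working volume through a pin-hole model to obtain a reduced pixel search window; the intrinsics and volume are hypothetical placeholders, not calibrated values from the system.

```python
import numpy as np

def roi_from_volume(K, corners_cam):
    """Project the 8 corners of a working volume (camera frame, metres)
    through pin-hole intrinsics K and return the bounding pixel window."""
    uv = (K @ corners_cam.T).T          # homogeneous image coordinates
    uv = uv[:, :2] / uv[:, 2:3]         # perspective division
    u0, v0 = uv.min(axis=0)
    u1, v1 = uv.max(axis=0)
    return int(u0), int(v0), int(np.ceil(u1)), int(np.ceil(v1))

# Hypothetical Kinect-like intrinsics and a 20 cm cube about 1 m from the camera.
K = np.array([[580.0, 0.0, 320.0],
              [0.0, 580.0, 240.0],
              [0.0, 0.0, 1.0]])
x, y, z = np.meshgrid([-0.1, 0.1], [-0.1, 0.1], [0.9, 1.1])
corners = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
print(roi_from_volume(K, corners))      # search only this window for markers
```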
The high-level fusion produces a more accurate tracking result by fusing the two measurements. We employ the covariance intersection (CI) algorithm to estimate a fused tracking result with reduced covariance.
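A minimal NumPy sketch of CI fusion under this definition (the grid search choosing the weight w to minimize the fused covariance trace is a common heuristic, not necessarily the scheme used in the papers; the example estimates are hypothetical):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2):
    """Fuse two estimates with unknown cross-correlation via covariance
    intersection: P_f^-1 = w*P1^-1 + (1-w)*P2^-1, with 0 <= w <= 1."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    # Choose w minimizing the trace of the fused covariance.
    w = min(np.linspace(0.0, 1.0, 101),
            key=lambda w: np.trace(np.linalg.inv(w * I1 + (1 - w) * I2)))
    Pf = np.linalg.inv(w * I1 + (1 - w) * I2)
    xf = Pf @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
    return xf, Pf

# Hypothetical tool-tip estimates (mm) from the IR and RGB stereo pairs.
x_ir, P_ir = np.array([102.1, 54.3, 880.0]), np.diag([0.2, 0.2, 0.9])
x_rgb, P_rgb = np.array([102.4, 54.0, 879.2]), np.diag([0.4, 0.4, 0.5])
x_fused, P_fused = covariance_intersection(x_ir, P_ir, x_rgb, P_rgb)
```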

Experiments

To demonstrate the proposed algorithm, we designed a hybrid marker-based tracking tool (Fig. 2) that combines a cross-based feature in the visible modality with a retro-reflective marker-based feature in the infrared modality to obtain a fused estimate of the customized tool tip. To evaluate the performance of the proposed method, we employed two Kinects to build the experimental setup; Fig. 3 shows the prototype of the multi-sensor fusion tracker. The results indicate that the CI-based fusion approach clearly outperforms the separate IR and RGB trackers: both the mean error and the standard deviation are improved.
Fig. 2 Hybrid marker-based tracking tool

Fig. 3 Dual Kinect tracking system

People Involved

Staff: Wei LIU, Shuang SONG, Andy LIM
Advisor: Dr. Hongliang Ren
Collaborator: Wei ZHANG

Publications

[1] Ren, H.; Liu, W. & Lim, A., "Marker-Based Instrument Tracking Using Dual Kinect Sensors for Navigated Surgery," IEEE Transactions on Automation Science and Engineering, 2013.
[2] Liu, W.; Ren, H.; Zhang, W. & Song, S., "Cognitive Tracking of Surgical Instruments Based on Stereo Vision and Depth Sensing," ROBIO 2013, IEEE International Conference on Robotics and Biomimetics, 2013.

Related FYP Project

Andy Lim: Marker-Based Surgical Tracking With Multiple Modalities Using Microsoft Kinect

References

[1] H. Ren, D. Rank, M. Merdes, J. Stallkamp, and P. Kazanzides, “Multi-sensor data fusion in an integrated tracking system for endoscopic surgery,” IEEE Transactions on Information Technology in Biomedicine, vol. 16, no. 1, pp. 106 – 111, 2012.
[2] W. Liu, C. Hu, Q. He, and M.-H. Meng, “A three-dimensional visual localization system based on four inexpensive video cameras,” in Information and Automation (ICIA), 2010 IEEE International Conference on. IEEE, 2010, pp. 1065–1070.
[3] F. Faion, S. Friedberger, A. Zea, and U. D. Hanebeck, “Intelligent sensor-scheduling for multi-kinect-tracking,” in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 2012, pp. 3993–3999.

FYP: Surgical Tracking With Multiple Microsoft Kinects

FYP Project Goals

The aim of this project is to track surgical instruments using Kinect sensors. With the advances in computing and imaging technologies in recent years, visual limitations during surgery, such as poor depth perception and limited field of view, can be overcome by computer-assisted systems. 3D models of the patient's anatomy (obtained during pre-operative planning via Computed Tomography scans or Magnetic Resonance Imaging) can be combined with intraoperative information such as the 3D position and orientation of surgical instruments. Such a computer-assisted system can reduce surgical mistakes and help identify unnecessary or imperfect surgical movements, effectively increasing the success rate of surgeries.
For computer-assisted systems to work, accurate spatial information about the surgical instruments is required. Most surgical tools are capable of six-degree-of-freedom (6DoF) movement, comprising translation along the x-, y-, and z-axes as well as rotation about these axes. The introduction of the Microsoft Kinect sensor raises the possibility of an alternative optical tracking system for surgical instruments.
This project's objective is therefore the development of an optical tracking system for surgical instruments utilizing the capabilities of the Kinect sensor. In this part of the project, the focus is on marker-based tracking.

Approach

  • The setup for the tracking of surgical instruments consists of two Kinects placed side by side with overlapping field of views.
  • A calibration board is used to determine the intrinsic camera parameters as well as the relative position of the cameras. This allows us to calculate the fundamental matrix, which is essential for the epipolar geometry calculations used in 3D point reconstruction. (Figure: the board (a) without and (b) with external LED illumination.) The same board is used for RGB camera calibration.
  • Seeded region growing allows the segmentation of retro-reflective markers from the duller background. The algorithm is implemented through OpenCV.
  • Corner detection: the cornerSubPix algorithm from OpenCV is used to refine the positions of the corners, giving sub-pixel accuracy of the corner positions (a minimal usage sketch follows this list).
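A minimal OpenCV (Python) sketch of this refinement step, with an assumed 9×6 inner-corner board and a hypothetical image file:

```python
import cv2

# Hypothetical frame; in the actual setup the image comes from a Kinect stream.
gray = cv2.imread("calibration_frame.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(gray, (9, 6))   # assumed 9x6 inner corners
if found:
    # Iteratively refine each corner within an 11x11 window to sub-pixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
```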

Current Results

  • The RMS error for IRR and checkerboard tracking ranges from 0.37 to 0.68 mm and from 0.18 to 0.37 mm, respectively, over a range of 1.2 m. Checkerboard tracking is found to be more accurate, and error increases with distance from the camera.
  • The jitter of the checkerboard tracking system was investigated and found to range from 0.071 mm to 0.29 mm over the same 1.2 m range.
  • (Figure: jitter measurements (dots) plotted against distance from the left camera, fitted with a second-order polynomial (line) to analyze how jitter varies with depth; a small fitting sketch follows this list.)
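A small NumPy sketch of such a fit; only the 0.071 mm and 0.29 mm endpoints come from the measurements above, and the distances and intermediate samples are made up for illustration:

```python
import numpy as np

# Distances from the left camera (m) and jitter (mm); hypothetical samples
# spanning the reported 0.071-0.29 mm range over 1.2 m of depth.
distance = np.array([0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0])
jitter = np.array([0.071, 0.095, 0.125, 0.16, 0.20, 0.245, 0.29])

coeffs = np.polyfit(distance, jitter, 2)   # second-order polynomial fit
model = np.poly1d(coeffs)
print(model(1.5))                          # predicted jitter at 1.5 m
```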

 

People Involved

FYP Student: Andy Lim Yong Mong
Research Engineer: Liu Wei
Advisor: Dr. Ren Hongliang

Related Project

Surgical Tracking Based on Stereo Vision and Depth Sensing

References

[1] Sun, W., Yang, X., Xiao, S., & Hu, W. (2008). Robust Checkerboard Recognition for Efficient Nonplanar Geometry Registration in Projector-camera Systems. Proceedings of the 5th ACM/IEEE International Workshop on Projector camera systems. ACM.
[2] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2 ed., Cambridge: Cambridge University Press, 2003.
[3] Q. He, C. Hu, W. Liu, N. Wei, M. Q.-H. Meng, L. Liu and C. Wang, "Simple 3-D Point Reconstruction Methods With Accuracy Prediction for Multiocular System," IEEE/ASME Transactions on Mechatronics, vol. 18, no. 1, pp. 366-375, 2013.

Statistical Humerus Implants and Associated Intramedullary Robotics

Project Goals

According to locally collected data, the sizes of current off-the-shelf humerus implants cannot properly accommodate Asian patients, since the implants are produced mainly for American and European populations. By creating statistical humerus atlases based on Asian data, gender-specific and region-specific humeral implants can be developed from the characteristics of the constructed statistical atlas, improving the stability of the fixation and avoiding related complications. In addition, it is envisioned that the statistical atlas can serve as a critical reference for the development and evaluation of robots in surgical procedures. In particular, for surgical and interventional procedures in the confined and rotated intramedullary space, the curvature and shape statistics of the internal humeral canal are of great significance for the dedicated design of snake-like curvilinear tubular robots.

In this project, an efficient way to construct a statistical atlas has been demonstrated, adopting an alignment algorithm with improved efficiency and good accuracy. The constructed humerus atlas then serves as the reference for the design of various humerus implants and the development of snake-like concentric-tube robots.

Approach

Statistical Atlas Construction: A three-step algorithm is adopted for statistical atlas construction: segmentation, alignment, and principal component analysis (PCA). Segmentation extracts the desired surface-mesh information of the humeri from the raw CT data, and alignment brings all samples into a common coordinate frame. The final step performs principal component analysis of the shapes and represents the statistical model using its principal components.
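A minimal NumPy sketch of the PCA step, assuming the meshes have already been segmented, put into point correspondence, and aligned (the shape vectors and sample counts are placeholders):

```python
import numpy as np

def build_atlas(shapes):
    """PCA over aligned training shapes; `shapes` is (n_samples, 3*n_points),
    each row one humerus surface mesh flattened to a vector."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # SVD of the centered data yields the principal components and variances.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    eigvals = s**2 / (shapes.shape[0] - 1)          # lambda_k
    return mean, Vt, eigvals

def synthesize(mean, Vt, eigvals, alphas):
    """New shape instance: mean + sum_k alpha_k * sqrt(lambda_k) * pc_k,
    with alpha_k expressed in standard deviations (e.g. -3 ... +3)."""
    k = len(alphas)
    return mean + (np.asarray(alphas) * np.sqrt(eigvals[:k])) @ Vt[:k]
```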

Creation and application of a statistical humerus atlas

Centerline Extraction for Intramedullary Robot Design: The Laplacian-based contraction method is adopted to extract the centerline of the humerus atlas. The purpose is to explore the intramedullary structure of the humerus, since the curvature and shape statistics of the internal humeral canal are significant for the design of snake-like curvilinear tubular robots.
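A sketch of the contraction iteration on a point cloud is given below; for simplicity it uses a kNN graph Laplacian rather than the cotangent Laplacian of the original method, and the weight schedule is an assumption:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr
from scipy.spatial import cKDTree

def contract(V, k=8, w_l=1.0, w_h=1.0, iters=5, s_l=2.0):
    """Laplacian-based contraction of a point cloud V (n, 3): repeatedly solve
    [w_l*L; w_h*I] V' = [0; w_h*V] in least squares, increasing the
    contraction weight each iteration so the cloud collapses to a skeleton."""
    n = V.shape[0]
    for _ in range(iters):
        _, idx = cKDTree(V).query(V, k + 1)        # k nearest neighbours (self first)
        rows = np.repeat(np.arange(n), k)
        A = sparse.coo_matrix((np.ones(n * k), (rows, idx[:, 1:].ravel())),
                              shape=(n, n))
        A = (A + A.T).tocsr()
        A.data = np.ones_like(A.data)              # symmetric binary adjacency
        L = sparse.diags(np.asarray(A.sum(axis=1)).ravel()) - A
        M = sparse.vstack([w_l * L, w_h * sparse.identity(n)]).tocsr()
        V = np.column_stack([lsqr(M, np.r_[np.zeros(n), w_h * V[:, d]])[0]
                             for d in range(3)])
        w_l *= s_l                                  # contract harder each pass
    return V
```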

Centerline Extraction Process

Curvature Analysis for Design of Humerus Implants: The maximum principal curvature is depicted in the figure below. The curvature analysis studies the statistical surface curvature of the humerus atlas in order to assist the design of humerus implants such as proximal and distal humerus locking plates, both used in orthopaedic trauma fixation.

Principal curvature of the statistical humerus atlas

Current Results

By adopting the novel atlas construction algorithm, the statistical humerus atlas is constructed as shown in the figure below, where the shape variation is along the first three principal components (PCs) and each row is generated by varying the shape from -3 to +3 standard deviations.
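In PCA notation, each row of such a figure renders shapes of the form

$$S(\alpha) = \bar{S} + \alpha\,\sqrt{\lambda_k}\,\phi_k, \qquad \alpha \in [-3, +3],$$

where $\bar{S}$ is the mean shape, $\lambda_k$ the k-th eigenvalue, and $\phi_k$ the k-th principal component; the same $\sqrt{\lambda_k}$ scaling appears in the supplementary figure captions below.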

The variation along the first principal component is shown here: from -3 std to +3 std, the length of the humerus model increases while the width decreases.

Shape variation along the 1st principal component

Moreover, by analyzing the characteristics of the humerus atlas, an intramedullary continuum robot and a proximal humerus locking plate were designed, as depicted in the following figures.

Design of intramedullary continuum robot based on the statistical humerus atlas

Proximal humerus locking plate

People Involved

Staff: Keyu WU
Advisor: Hongliang REN
Clinicians: Keng Lin Wong, Zubin Jimmy Daruwalla, Diarmuid Murphy, National University Hospital

Publications

  • Wu K, Wong FKL, Ng SJK, Quek ST, Zhou B, Murphy D, Daruwalla ZJ and Ren H (2015), “Statistical atlas-based morphological variation analysis of the asian humerus: Towards consistent allometric implant positioning”, International Journal of Computer Assisted Radiology and Surgery. Vol. 10(3), pp. 317-327. Springer Berlin Heidelberg
  • Wu K, Daruwalla ZJ, Wong FKL, Murphy D and Ren H (2015), "Development and Selection of Asian-specific Humeral Implants based on Statistical Atlas: Towards Planning Minimally Invasive Surgery", International Journal of Computer Assisted Radiology and Surgery. Vol. 10(8), pp. 1333-1345. Springer Berlin Heidelberg.
  • Wu, K.; Wong, F. K. L.; Daruwalla, Z. J.; Murphy, D. & Ren, H. Statistical Humerus Atlas for Optimal Design of Asian-Specific Humerus Implants and Associated Intramedullary Robotics, ROBIO 2013, IEEE International Conference on Robotics and Biomimetics, 2013; (Best Paper Finalist award)

Supporting materials

Supplementary information for

Statistical Atlas Based Morphologic Variation Analysis of the Asian Humerus: Towards Consistent Allometric Implant Positioning

K. Wu, K. L. Wong, S. J. K. Ng, S. T. Quek, B. Zhou, D. P. Murphy, Z. J. Daruwalla, H. Ren*

 

All data are published in .nii format, which can be opened by ITK-SNAP (http://www.itksnap.org/pmwiki/pmwiki.php) and 3D Slicer (http://www.slicer.org/).

Table 1. Information of the subjects (Exercise habit: S = sedentary, A = active; Left/Right: L = left, R = right).

Study Number  M/F  Age  Race  Weight (kg)  Height (cm)  Exercise  Side
5260823       F    61   C     67.9         143          S         L
5190438       M    16   C     55.6         160          A         R
5186672       M    63   C     67.5         160          A         L
5172526       M    50   C     61.0         170          S         L
5018737       F    77   C     57.9         151          S         R
4636229       M    54   C     42.8         161          S         L
4585165       M    38   C     60.3         171          A         L & R
4481609       F    83   C     60.1         149          S         L
4448930       F    49   I     73.7         150          S         R
4320030       M    69   C     63.7         163          S         L
4111802       M    63   C     57.4         160          S         L
3969853       F    61   C     64.3         157          A         R
3564131       M    19   C     97.5         167          A         L
3495415       M    74   M     57.1         175          S         L
3827904       M    33   M     64.2         170          A         L
3722692       F    73   I     57.3         151          S         L
3768879       M    20   C     65.0         170          A         L
3771777       F    74   C     53.3         165          S         L & R
3159961       M    51   C     60.0         170          A         L
3273883       F    80   C     54.8         155          S         L
3384844       F    71   C     53.1         155          S         R
3353378       F    78   M     40.0         140          S         L
3271006       F    59   M     60.0         151          S         L
3189139       M    47   C     72.0         179          A         R
3132590       M    22   M     79.0         168          A         L
3143094       M    23   C     60.6         162          A         R
3282457       F    90   C     41.2         150          S         L
3296805       M    57   M     61.2         160          S         L
3326396       M    21   C     76.0         175          A         L
3338591       F    61   C     56.0         153          S         L & R
3060290       M    21   M     73.0         160          A         L
5581085       F    49   M     81.9         159          S         L & R
5691045       F    45   C     45.1         161          A         L
5731852       F    87   C     58.2         150          S         R
5733267       F    57   C     56.4         142          S         L & R
5622431       M    64   I     52.9         162          A         R
6007641       M    44   I     74.2         165          A         L
6013682       F    74   C     52.0         153          S         R
5504982       F    24   C     54.0         153          A         L

Fig. 1. Statistical male humerus atlas (left). The shape variation is along the first three principal components (PCs) and each row is generated by varying the shape with standard deviations ranging from -3√(λ_k) to +3√(λ_k).

Fig. 2. Statistical female humerus atlas (right). The shape variation is along the first three principal components (PCs) and each row is generated by varying the shape with standard deviations ranging from -3√(λ_k) to +3√(λ_k).

References

[1] H. Ren, N. V. Vasilyev, and P. E. Dupont, “Detection of curved robots using 3d ultrasound,” in Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on. IEEE, 2011, pp. 2083–2089.
[2] H. Ren and P. E. Dupont, “Artifacts reduction and tubular structure enhancement in 3d ultrasound images,” in International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC, 2011.
[3] G. Chintalapani, L. M. Ellingsen, O. Sadowsky, J. L. Prince, and R. H. Taylor, “Statistical atlases of bone anatomy: construction, iterative improvement and validation,” in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2007. Springer, 2007, pp. 499–506.
[4] H. Ren and P. E. Dupont, “Tubular enhanced geodesic active contours for continuum robot detection using 3d ultrasound,” in IEEE International Conference on Robotics and Automation, ICRA ’12, 2012.
[5] X. Kang, H. Ren, J. Li, and W.-P. Yau, “Statistical atlas based registration and planning for ablating bone tumors in minimally invasive interventions,” in Robotics and Biomimetics (ROBIO), 2012 IEEE International Conference on. IEEE, 2012, pp. 606–611.
[6] J. Cao, A. Tagliasacchi, M. Olson, H. Zhang, Z. Su, “Point Cloud Skeletons via Laplacian Based Contraction,” in Shape Modeling International Conference (SMI), pp. 21-23, 2010.

Ablation Planning in Computer-Assisted Interventions

Project Goals

Tumor ablation is the removal of tumor tissue and is considered a type of minimally invasive intervention. It can be performed using techniques such as cryoablation, high-intensity focused ultrasound (HIFU), and radiofrequency ablation (RFA). These techniques rely on minimally invasive principles to ablate tumor tissue without directly exposing the target region to the environment. It has been widely noted that the success of a tumor ablation procedure hinges greatly on its pre-operative planning, which is often computer-assisted. The proposed ablation planning system focuses mainly on the radiofrequency ablation (RFA) of hepatic tumors. This project develops computational optimization algorithms to plan optimal ablation delivery. Ablation planning systems must model the 3D interventional environment, identify feasible needle-insertion trajectories, and deploy ablation electrodes, all while avoiding critical structures.

Approach

A Genetic Algorithm (GA) was used because it can be designed to accommodate the multi-objective nature of tumor ablation planning. The proposed planning system is designed around the following objectives: achieving complete tumor coverage while minimizing the number of ablations, the number of needle trajectories, and the damage to healthy tissue. A GA generates many candidate solutions within a defined search space and selects solutions to undergo evolution based on a quantified value given by a fitness function. An exponential weight-criterion fitness function is used to represent the multiple objectives, such as the number of ablation spheres, the number of trajectories, the covariance, and the coverage volume.
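The sketch below conveys the flavor of such a fitness function in Python; the functional form, the weights, and the Monte-Carlo coverage estimator are illustrative assumptions, not the published formulation:

```python
import numpy as np

def coverage_fraction(tumor_pts, centers, radius):
    """Fraction of sampled tumor points inside the union of ablation spheres."""
    d = np.linalg.norm(tumor_pts[:, None, :] - centers[None, :, :], axis=-1)
    return float((d.min(axis=1) <= radius).mean())

def fitness(tumor_pts, centers, n_trajectories, radius, weights=(4.0, 0.3, 0.3)):
    """Exponential weight-criterion style score: reward coverage, penalize
    the number of ablation spheres and needle trajectories."""
    w_cov, w_sph, w_traj = weights
    cov = coverage_fraction(tumor_pts, centers, radius)
    return np.exp(w_cov * cov - w_sph * len(centers) - w_traj * n_trajectories)

# Toy usage: score a 3-sphere candidate on points sampled in a 15 mm tumor.
rng = np.random.default_rng(0)
pts = rng.uniform(-15, 15, size=(2000, 3))
pts = pts[np.linalg.norm(pts, axis=1) <= 15.0]    # keep points inside the tumor
candidate = np.array([[0.0, 0.0, -8.0], [0.0, 0.0, 0.0], [0.0, 0.0, 8.0]])
print(fitness(pts, candidate, n_trajectories=1, radius=15.0))
```

Within a GA, such a score would be evaluated for each individual in the population to drive selection, crossover, and mutation.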

Current Results

The proposed mathematical protocol for determining the range of ablation spheres required to achieve complete tumor coverage is feasible as a reference for tumor ablation planning. The following figure shows how tumor coverage changed when trajectory optimization was considered: 0% tumor coverage (top) versus 100% tumor coverage with an ablation radius of 15 and 3 ablation spheres (orange) (bottom).

Publications

  • Ren, H.; Guo, W.; Ge, S. S. & Lim, W., "Coverage Planning in Computer-Assisted Ablation Based on Genetic Optimization," Computers in Biology and Medicine, in press, 2014.
  • Lim, W. & Ren, H., "Cognitive Planning Based on Genetic Algorithm in Computer-Assisted Interventions," CIS-RAM 2013, 6th IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and the 6th IEEE International Conference on Robotics, Automation and Mechatronics (RAM), 2013.

People Involved

FYP Student: Wan Cheng LIM
Graduate Student: Weian GUO
Advisor: Dr. Hongliang REN

References

[1] C. Baegert, C. Villard, P. Schreck, L. Soler, and A. Gangi, “Trajectory optimization for the planning of percutaneous radiofrequency ablation of hepatic tumors,” Computer Aided Surgery, 12(2): pp. 82-90, March, 2007.
[2] Z. Yaniv, P. Cheng, E. Wilson, T. Popa, D. Lindisch, E. Campos-Nanez, H. Abeledo, V. Watson, and F. Banovac, “Needle-Based Interventions With the Image-guided Surgery Toolkit (IGSTK): From Phantoms to Clinical Trials,” IEEE Trans. on Biomedical Engineering, vol. 57, no. 4, April, 2010.
[3] G. D. Dodd, M. C. Soulen, R. A. Kane, T. Livraghi, W. R. Lees, Y. Yamashita, A. R. Gillams, O. I. Karahan, H. Rhim. “Minimally invasive treatment of malignant hepatic tumors: At the threshold of a major breakthrough,” RadioGraphics, vol. 20, no. 1, January-February, 2000.
[4] C. Rieder, T. Kroger, C. Schumann, and H. K. Hahn, “GPU-Based Real-Time Approximation of the Ablation Zone for Radiofrequency Ablation”, IEEE Trans. On Visualization and Computer Graphics, vol. 17, no. 12, pp. 1812-1821, December, 2011.

Related Poster

BN5209 Neurosensors and Signal Processing AY12/13

BN5209 Neurosensors and Signal Processing Semester 2, 2012/2013

SCHEDULE

Time period: 15-Jan-13 To 10-May-13
Lecture Time:

  • Tuesday: 4:30 pm – 6:30 pm (SINAPSE)
  • Thursday: 4:30 pm – 6:30 pm (SINAPSE)

Syllabus

  • Week 1: Mon 14 Jan – Fri 18 Jan 2013
    Introduction to the Course and Introduction to Neurosciences, Neurophysiology
    (Quiz)
  • Week 2: Mon 21 Jan – Fri 25 Jan 2013
Neural recording methods: Microelectrodes, MEMS, optical neurosensors
    (Notes) (Quiz)
  • Week 3: Mon 28 Jan – Fri 1 Feb 2013
    Neural recording methods: Neural circuits, amplifiers, telemetry, stimulation
    (Notes)(Quiz 1, Quiz 2)
  • Week 4: Mon 4 Feb – Fri 8 Feb 2013
    Introduction of Signal Processing (Notes)
  • Week 5: Mon 11 Feb – Fri 15 Feb 2013
    Neural signals (basic science) – action potentials (spikes) and analysis
    (Notes)
  • Week 6: Mon 18 Feb – Fri 22 Feb 2013
Neural signals (clinical applications) – EEG, evoked potentials
    (W6_Part1SpikeDataAnalysis.pdf;W6_Part2ElectrophysiologicalBasisOfNeuralRecordings_Kaiquan.ppt;W6_EEG_EP_NeuralSignalsClinical)
  • Week 7: Mon 4 Mar – Fri 8 Mar 2013
    Brain machine interfaces (Notes)
  • Week 8: Mon 11 Mar – Fri 15 Mar 2013
    Multiple Dimensional Signal Processing (Notes) (Quiz)
  • Week 9: Mon 18 Mar – Fri 22 Mar 2013
    Neuroimaging and Neurosurgery (Notes)
  • Week 10: Mon 25 Mar – Fri 29 Mar 2013
    Optical imaging: Cellular (microscopy), In Vivo (speckle, Photoacoustic, OCT)
  • Week 11: Mon 1 Apr – Fri 5 Apr 2013
    Neurosurgical systems and Image Processing
  • Week 12: Mon 8 Apr – Fri 12 Apr 2013
    Applications of neural interfaces (peripheral and central cortical)
  • Week 13: Mon 15 Apr – Fri 19 Apr 2013
    Project Reports/presentations

Course Projects

Please log in to Dropbox to view the materials.
1. EEG for brain state monitoring
2. EEG/EMG Feature Identification during Elbow Flexion/Extension

AIMS & OBJECTIVES

This module teaches students advanced neuroengineering principles, ranging from an introduction to basic neuroscience to neurosensing technology and advanced signal processing techniques. Major topics include: introduction to neurosciences; neural recording methods; neural circuits, amplifiers, telemetry and stimulation; sensors for measuring the electric and magnetic fields of the brain in relation to brain activities; digitization of brain activities; neural signal processing; brain machine interfaces; neurosurgical systems; and applications of neural interfaces. The module is designed for students at the Master's and PhD levels in Engineering, Science and Medicine.

PREREQUISITES

Basic probability
Basic circuits
Linear algebra (matrix/vector)
Matlab or other programming
Recommended Textbooks: Neural Engineering, Edited by Bin He

TEACHING MODES

The majority of the course will be in lecture-tutorial format. Some advanced topics will be in the formats of seminar and research presentations.

ASSESSMENT

In-Class Quizzes (10, for 20% of grade)
Take-Home Tests (2, for 50%) or Exam
Labs/Projects (3, for 30%)

IVLE Registration and Information


BN5209 Neurosensors and Signal Processing (10/11)

Course information from IVLE – Integrated Virtual Learning Environment

Module Code BN5209 (2010/2011 Semester 2)
Module Title NEUROSENSORS AND SIGNAL PROCESSING
Description This module teaches students the electric and magnetic fields of the human brain in relation to brain activities, and methods for sensing these fields. Major topics include: the electric and magnetic fields of the brain in relation to brain activities; sensors for measuring these fields; digitization of brain activities – neural waves; characterization of neural waves; neural power maps and neural matrices; brain activity pattern recognition using neural power maps and neural matrices; and applications of brain activity monitoring. The module is designed for students at the Master's and PhD levels in Engineering, Science and Medicine.
Module Credit 4
Workload 2-0-1-0-7

More Information can be found from IVLE – Integrated Virtual Learning Environment.

BN2203 Introduction to Bioengineering Design

Course information from IVLE – Integrated Virtual Learning Environment

Module Code BN2203
Module Title Introduction to Bioengineering Design
Description This module introduces the students to the basic elements for design of medical devices through a hands-on design project performed in teams. Examples of engineering analysis and design are applied to representative topics in bioengineering, such as biomechanics, bioinstrumentation, biomaterials, biotechnology, and related areas. Topics include: identification of the technological needs, design methodology, evaluation of costs and benefits, quality of life and ethical considerations.
Module Credit 4
Workload 1.5-1-0-4.5-3
Prerequisites BIE Stage 2 standing

More Information can be found from IVLE – Integrated Virtual Learning Environment.