PhD/Postdoc/RA (and Visiting Scholar/Prof) Opportunities in AI, Robotics & Perception at CUHK, Hong Kong

[RESEARCH AREA]

There are multiple openings for Postdoc/RA (and Visiting Scholar/Prof/Ph.D.) positions to perform research on Medical Robotics, Perception & AI at The Chinese University of Hong Kong (CUHK, Hong Kong), starting immediately. The main areas of interest include AI-assisted endoscopic diagnosis; biorobotics and intelligent systems; multisensory perception; AI learning and control in image-guided procedures; medical mechatronics; continuum and soft flexible robots and sensors; deployable motion generation; compliance modulation and sensing; and cooperative, context-aware flexible/soft sensors and actuators in human environments. For more details, please refer to the recent publications on Google Scholar or the lab website http://labren.org/.

The scholars will have opportunities to work with an interdisciplinary team of clinicians and researchers from robotics, AI & perception, imaging, and medicine.
Salary and remuneration will be highly competitive and commensurate with qualifications and experience (e.g., a Postdoc salary will typically be above USD 4,300 per month, plus medical insurance and other benefits).

[QUALIFICATIONS]

* Background in AI, Computer Science/Engineering, Electronic or Mechanical Engineering, Robotics, Medical Physics, Automation, or Mechatronics
* Hands-on experience with AI/robots/sensors, instrumentation, or intelligent systems preferred
* Strong problem-solving, writing, programming, interpersonal, and analytical skills
* Outstanding academic records, publications, or recognitions from worldwide top-ranking institutes
* Self-motivated

[HOW TO APPLY]

Qualified candidates are invited to express their interest by emailing detailed supporting documents (including CV, transcripts, HK visa status, research interests, educational background, experience, GPA, representative publications, and demo projects) to Prof. Hongliang Ren at <hlren@ee.cuhk.edu.hk> as soon as possible. Due to the large volume of emails, we ask for your understanding that only shortlisted candidates will be contacted for an interview.

🎙️ Prof. Hongliang Ren presented at the 6th CCF China Intelligent Robot Academic Annual Meeting, sharing insights on "Motion Generation and Perception of Flexible Robots in Minimally Invasive Intracavity Surgery".

🧠 Talk Highlights:

The presentation explored the challenges and opportunities in motion generation and perception for flexible robots operating in minimally invasive surgical environments. Prof. Ren emphasized the importance of image-guided robotic systems in enhancing surgical precision, flexibility, and repeatability, while acknowledging the complexities these systems introduce in development.

He shared recent advances from our lab in intelligent motion planning and perception, aiming to enable smart micro-imaging and guided robotic interventions. The proposed remote robotic system is tailored for surgical applications, empowering clinicians with multi-modal sensing and continuous motion generation for dexterous operations.

🚀 Excited to share that our paper "EndoVLA: Dual-Phase Vision-Language-Action Model for Autonomous Tracking in Endoscopy" has been accepted to the Conference on Robot Learning (CoRL) 2025!

In this project, we tackled the unique challenges of robotic endoscopy by integrating vision, language grounding, and motion planning into one end-to-end framework. EndoVLA enables:

– Precise polyp tracking through surgeon-issued prompts

– Delineation and following of abnormal mucosal regions

– Adherence to circumferential cutting markers during resections

We introduced a dual-phase training strategy:

1. Supervised fine-tuning on our new EndoVLA-Motion dataset

2. Reinforcement fine-tuning with task-aware rewards

This approach substantially boosts tracking accuracy and achieves zero-shot generalization across diverse GI scenes.
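For readers curious how the two phases fit together, here is a minimal, hypothetical PyTorch sketch of supervised fine-tuning followed by REINFORCE-style fine-tuning with a task-aware reward. All names (TinyPolicy, supervised_phase, reinforcement_phase) are illustrative placeholders, not the EndoVLA codebase, and the reward is a toy surrogate.

    import torch
    import torch.nn as nn

    # Stand-in policy network; the real model is a vision-language-action model.
    class TinyPolicy(nn.Module):
        def __init__(self, obs_dim=32, act_dim=2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim)
            )

        def forward(self, obs):
            return self.net(obs)

    def supervised_phase(policy, demos, epochs=3, lr=1e-3):
        """Phase 1: behavior-cloning-style fine-tuning on (observation, action) pairs."""
        opt = torch.optim.Adam(policy.parameters(), lr=lr)
        for _ in range(epochs):
            for obs, act in demos:
                loss = nn.functional.mse_loss(policy(obs), act)
                opt.zero_grad()
                loss.backward()
                opt.step()

    def reinforcement_phase(policy, reward_fn, rollouts=100, lr=1e-4, noise=0.1):
        """Phase 2: REINFORCE updates weighted by a task-aware reward."""
        opt = torch.optim.Adam(policy.parameters(), lr=lr)
        for _ in range(rollouts):
            obs = torch.randn(8, 32)              # placeholder observations
            dist = torch.distributions.Normal(policy(obs), noise)
            act = dist.sample()
            reward = reward_fn(obs, act)          # task-aware reward signal
            loss = -(dist.log_prob(act).sum(-1) * reward).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()

    policy = TinyPolicy()
    demos = [(torch.randn(8, 32), torch.randn(8, 2)) for _ in range(10)]
    supervised_phase(policy, demos)
    # Toy reward: negative squared distance of the action from the origin.
    reinforcement_phase(policy, lambda o, a: -a.pow(2).sum(-1))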

The paper is available at: https://lnkd.in/g35DF7Fq

🎉 Excited to share that our paper "CoPESD: A Multi-Level Surgical Motion Dataset for Training Large Vision-Language Models to Co-Pilot Endoscopic Submucosal Dissection" has been accepted to ACM MM 2025 (Dataset Track)!

We built CoPESD to help AI better understand surgical workflows, especially the complex motions involved in Endoscopic Submucosal Dissection (ESD). The dataset includes:

📹 35+ hours of annotated surgical videos

🖼️ 17,679 labeled frames

🔍 88,395 motion annotations across multiple levels

We designed a hierarchical annotation scheme to capture fine-grained surgical motions, with a particular focus on the submucosal dissection phase. Our goal is to enable vision-language models that can one day assist surgeons in real time, like a smart co-pilot in the OR.
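As an illustration only (the field names below are hypothetical, not CoPESD's released schema), a multi-level motion annotation might be organized like this in Python:

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical hierarchical annotation records; the actual format is
    # defined in the CoPESD paper and dataset release.
    @dataclass
    class MotionAnnotation:
        level: str        # e.g., "phase", "step", or "fine-grained motion"
        label: str        # e.g., "submucosal dissection", "lift mucosal flap"
        start_frame: int
        end_frame: int

    @dataclass
    class AnnotatedClip:
        video_id: str
        annotations: List[MotionAnnotation] = field(default_factory=list)

    clip = AnnotatedClip(
        video_id="esd_case_001",
        annotations=[
            MotionAnnotation("phase", "submucosal dissection", 0, 1800),
            MotionAnnotation("fine-grained motion", "lift mucosal flap", 120, 240),
        ],
    )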

Thanks to all collaborators (Guankun Wang, Han Xiao, Huxin Gao, Renrui Zhang, Long Bai, Xiaoxiao Yang, Zhen Li, Hongsheng Li, Hongliang Ren) and institutions (CUHK, Shanghai AI Lab, Qilu Hospital of SDU) involved. We're excited to see how this dataset can push forward research in surgical AI, robotics, and multimodal learning.

📄 Check out the paper: https://lnkd.in/gkF6A4QY

🎉 Excited to share that our paper "Rethinking Data Imbalance in Class Incremental Surgical Instrument Segmentation" has been accepted by Medical Image Analysis (IF 11.8)!

In this work, we tackled the challenge of training models that keep learning new surgical instruments over time without forgetting the old ones. Data imbalance makes this especially tricky, so we proposed a plug-and-play framework that rebalances the data using inpainting and blending techniques, and introduced a new loss function to reduce confusion between similar-looking tools.
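As a rough illustration of the blending idea (a generic copy-and-blend augmentation sketch under our own assumptions, not the paper's exact pipeline, which also uses inpainting):

    import numpy as np

    def blend_rare_instrument(target_img, target_mask, donor_img, donor_mask, alpha=0.8):
        """Paste a rare instrument from a donor frame into a target frame,
        alpha-blending the pasted pixels to soften boundary artifacts."""
        out_img = target_img.astype(np.float32).copy()
        out_mask = target_mask.copy()
        region = donor_mask.astype(bool)
        out_img[region] = alpha * donor_img[region] + (1 - alpha) * out_img[region]
        out_mask[region] = donor_mask[region]      # rare-class label wins
        return out_img.astype(target_img.dtype), out_mask

    # Toy usage: random frames with a square "instrument" region of class id 3.
    h, w = 64, 64
    target = np.random.randint(0, 255, (h, w, 3), dtype=np.uint8)
    donor = np.random.randint(0, 255, (h, w, 3), dtype=np.uint8)
    target_mask = np.zeros((h, w), dtype=np.uint8)
    donor_mask = np.zeros((h, w), dtype=np.uint8)
    donor_mask[10:30, 10:30] = 3
    img, mask = blend_rare_instrument(target, target_mask, donor, donor_mask)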

Big thanks to our amazing team (Shifang Zhao, Long Bai, Kun Yuan, Feng Li, Jieming Yu, Wenzhen Dong, Guankun Wang, Prof. Mobarak I. Hoque, Prof. Nicolas Padoy, Prof. Nassir Navab, Prof. Hongliang Ren) from CUHK, TUM, Strasbourg, and UCL. This collaboration truly brought together ideas from different corners of the world 🌍

The paper is now online: https://lnkd.in/gemZFNUK (code coming soon 👨‍💻).

📢 Call for Papers: ICBIR 2025 | August 16–18, 2025 | Zhangye, Gansu, China

Join us in the breathtaking landscapes of Zhangye, China, for the 2025 International Conference on Biomimetic Intelligence and Robotics (ICBIR), an affiliated event of the Q1 Elsevier journal Biomimetic Intelligence and Robotics (IF 5.4).

We welcome original contributions covering:

• Biomimetic design, materials & actuation

• Bio-inspired sensing, perception & navigation

• Learning-based control & embodied AI

• Soft & adaptive robotics

• Novel real-world applications integrating theory and practice

All accepted papers will be published by Elsevier and indexed in EI & Scopus. Top-ranked submissions will earn best-paper awards and invitations to submit expanded versions to Biomimetic Intelligence and Robotics and other leading journals.

Key Dates:
• Full-paper (or short abstract) submissions due → July 20, 2025
• Acceptance notifications → August 1, 2025
• Registration & final manuscript → August 10, 2025

Learn more & submit at ▶️ http://www.icbir.org

Let's decode nature's genius and engineer the next generation of intelligent machines, together! 🌿🤖

🎉 We are honored to share that our lab's paper, "PDZSeg: Adapting the Foundation Model for Dissection Zone Segmentation with Visual Prompts in Robot-Assisted Endoscopic Submucosal Dissection," has been published in the International Journal of Computer Assisted Radiology and Surgery.

The paper was also accepted for presentation at IPCAI 2025, and we're especially humbled to receive the IHU Strasbourg and NDI Bench to Bedside Award: Honorable Mention.

In this work, we address the challenge of accurately delineating dissection zones during endoscopic submucosal dissection procedures. By overlaying flexible visual cues, such as scribbles and bounding boxes, directly onto surgical images, our PDZSeg model guides segmentation for better precision and enhanced safety. Leveraging a state-of-the-art foundation model (DINOv2) and an efficient LoRA training strategy, we fine-tuned our approach on the specialized ESD-DZSeg dataset. Our experimental results show promising improvements over traditional methods, offering robust support for intraoperative guidance and remote surgical training.
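For the curious, here is a minimal sketch of attaching LoRA adapters to a DINOv2 backbone, assuming the HuggingFace transformers and peft libraries. The target module names and the tiny patch-level head are our assumptions for illustration, not the PDZSeg implementation:

    import torch
    import torch.nn as nn
    from transformers import Dinov2Model
    from peft import LoraConfig, get_peft_model

    # Wrap the attention projections of a small DINOv2 with low-rank adapters,
    # so only a small fraction of parameters is trained.
    backbone = Dinov2Model.from_pretrained("facebook/dinov2-small")
    lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"])
    backbone = get_peft_model(backbone, lora_cfg)

    # Toy per-patch head (dissection zone vs. background); the real decoder
    # is described in the paper.
    seg_head = nn.Linear(backbone.config.hidden_size, 2)

    # Visual prompts (scribbles, boxes) are drawn onto the image itself, so
    # the input is simply a prompted RGB tensor.
    pixels = torch.randn(1, 3, 224, 224)
    feats = backbone(pixel_values=pixels).last_hidden_state[:, 1:]  # drop CLS
    logits = seg_head(feats)   # (1, num_patches, 2) patch-level predictions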

Our sincere thanks to every colleague (Mengya Xu, Wenjin Mo, Guankun Wang, Huxin Gao, An Wang), mentor (Dr. Ning Zhong, Dr. Zhen Li, Dr. Xiaoxiao Yang, Prof. Hongliang Ren), and community member whose support has been indispensable. This achievement reaffirms our collective effort and inspires us to further refine robotic-assisted techniques towards enhanced safety and effectiveness.

Paper available at: https://lnkd.in/g7KcytnE

🎉 Big congratulations to Sishen Yuan on successfully defending his PhD thesis, "Magnetic Medical Robots: System Design, Control and Translational Applications"! 🚀

Dr. Yuanโ€™s research advances the field of medical robotics, paving the way for innovative healthcare solutions. A well-deserved milestone after years of dedication and research!

Special thanks to his supervisors, Prof. Hongliang Ren and Prof. Max Q.-H. Meng, as well as the examining committee members, Prof. Gao Shichang, Prof. Wu 'Scott' Yuan, and Prof. Zhidong Wang, for their invaluable guidance and support.

🔗 Learn more about Dr. Yuan's research at https://lnkd.in/g7GVhwtv.

🚀 Excited to share our recent work accepted to Advanced Robotics Research:

“๐ฟ๐‘œ๐‘ค-๐‘†๐‘ก๐‘Ÿ๐‘Ž๐‘–๐‘› ๐น๐‘™๐‘’๐‘ฅ๐‘–๐‘๐‘™๐‘’ ๐ผ๐‘›๐‘˜-๐ต๐‘Ž๐‘ ๐‘’๐‘‘ ๐‘†๐‘’๐‘›๐‘ ๐‘œ๐‘Ÿ๐‘  ๐ธ๐‘›๐‘Ž๐‘๐‘™๐‘’ ๐ป๐‘ฆ๐‘๐‘’๐‘Ÿ๐‘’๐‘™๐‘Ž๐‘ ๐‘ก๐‘–๐‘ ๐ผ๐‘›๐‘“๐‘™๐‘Ž๐‘ก๐‘–๐‘œ๐‘› ๐‘ƒ๐‘’๐‘Ÿ๐‘๐‘’๐‘๐‘ก๐‘–๐‘œ๐‘› ๐‘‰๐‘–๐‘Ž ๐บ๐‘’๐‘œ๐‘š๐‘’๐‘ก๐‘Ÿ๐‘–๐‘ ๐‘ƒ๐‘Ž๐‘ก๐‘ก๐‘’๐‘Ÿ๐‘›๐‘–๐‘›๐‘””

📄 What we explored:

– Designed Ω-shaped flexible sensors using conductive ink to reduce strain mismatches on inflatable robots

– Developed a light-curing transfer method for precise sensor attachment

– Tested integration with balloon-type robots, showing improved deformation tracking at >300% expansion

🩺 Our sensor system has the potential to enable:

🔹 Real-time inflation tracking for safer human-robot interaction

🔹 Spatial perception in biomedical devices (catheters, surgical tools)

🔹 5× reduction in circumference/area error vs. conventional designs
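As a generic back-of-envelope model (standard strain-gauge theory with placeholder constants, not the paper's calibration or its Ω-pattern mechanics), circumferential strain, and hence inflated circumference, can be estimated from the sensor's relative resistance change:

    # Generic strain-gauge relation: dR/R = GF * strain  =>  strain = (dR/R) / GF.
    GAUGE_FACTOR = 2.0             # placeholder gauge factor of the ink trace
    REST_CIRCUMFERENCE_MM = 50.0   # placeholder resting circumference

    def estimate_circumference(resistance_ohm, rest_resistance_ohm):
        """Estimate the inflated circumference from a resistance reading."""
        strain = (resistance_ohm - rest_resistance_ohm) / rest_resistance_ohm / GAUGE_FACTOR
        return REST_CIRCUMFERENCE_MM * (1.0 + strain)

    # Example: 100 -> 160 ohm gives 30% strain, i.e. an estimated 65 mm.
    print(estimate_circumference(160.0, 100.0))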

Congrats to the authors: Wenchao Yue, Shuoyuan Chen, Yan Ke, Yingyi Wen, Ruijie Tang, Guohua Hu, and Hongliang Ren.

📄 Paper open access at: https://lnkd.in/gm9DRybP

🎉 Congratulations to our Ph.D. candidate, Long Bai, for successfully defending his doctoral dissertation on June 3, 2025! 🎉

We extend our sincere gratitude to Prof. Tan Lee, Prof. Qi Dou, and Prof. S. Kevin Zhou for serving as examiners during Long Bai’s defense. Special thanks to his supervisors, Prof. Hongliang Ren and Prof. Jiewen Lai, for their invaluable guidance throughout his Ph.D. journey.

During his time at CUHK RenLab, Dr. Bai has made impressive contributions to surgical and medical artificial intelligence, particularly in multimodal AI.

🔗 For more details about his research, visit his personal website: longbai-cuhk.github.io.

Wishing Dr. Long Bai all the best in his future endeavors! 🚀👏
