{"id":3447,"date":"2026-02-17T01:11:57","date_gmt":"2026-02-17T01:11:57","guid":{"rendered":"http:\/\/www.labren.org\/mm\/?p=3447"},"modified":"2026-02-17T01:11:58","modified_gmt":"2026-02-17T01:11:58","slug":"%f0%9f%9a%80-icra-2026-%f0%9d%91%ac%f0%9d%92%8f%f0%9d%92%85%f0%9d%92%90%f0%9d%91%ab%f0%9d%91%ab%f0%9d%91%aa-%f0%9d%91%b3%f0%9d%92%86%f0%9d%92%82%f0%9d%92%93%f0%9d%92%8f%f0%9d%92%8a%f0%9d%92%8f","status":"publish","type":"post","link":"http:\/\/www.labren.org\/mm\/news\/%f0%9f%9a%80-icra-2026-%f0%9d%91%ac%f0%9d%92%8f%f0%9d%92%85%f0%9d%92%90%f0%9d%91%ab%f0%9d%91%ab%f0%9d%91%aa-%f0%9d%91%b3%f0%9d%92%86%f0%9d%92%82%f0%9d%92%93%f0%9d%92%8f%f0%9d%92%8a%f0%9d%92%8f\/","title":{"rendered":"\ud83d\ude80 ICRA 2026: \ud835\udc6c\ud835\udc8f\ud835\udc85\ud835\udc90\ud835\udc6b\ud835\udc6b\ud835\udc6a: \ud835\udc73\ud835\udc86\ud835\udc82\ud835\udc93\ud835\udc8f\ud835\udc8a\ud835\udc8f\ud835\udc88 \ud835\udc7a\ud835\udc91\ud835\udc82\ud835\udc93\ud835\udc94\ud835\udc86 \ud835\udc95\ud835\udc90 \ud835\udc6b\ud835\udc86\ud835\udc8f\ud835\udc94\ud835\udc86 \ud835\udc79\ud835\udc86\ud835\udc84\ud835\udc90\ud835\udc8f\ud835\udc94\ud835\udc95\ud835\udc93\ud835\udc96\ud835\udc84\ud835\udc95\ud835\udc8a\ud835\udc90\ud835\udc8f \ud835\udc87\ud835\udc90\ud835\udc93 \ud835\udc6c\ud835\udc8f\ud835\udc85\ud835\udc90\ud835\udc94\ud835\udc84\ud835\udc90\ud835\udc91\ud835\udc8a\ud835\udc84 \ud835\udc79\ud835\udc90\ud835\udc83\ud835\udc90\ud835\udc95\ud835\udc8a\ud835\udc84 \ud835\udc75\ud835\udc82\ud835\udc97\ud835\udc8a\ud835\udc88\ud835\udc82\ud835\udc95\ud835\udc8a\ud835\udc90\ud835\udc8f \ud835\udc97\ud835\udc8a\ud835\udc82 \ud835\udc6b\ud835\udc8a\ud835\udc87\ud835\udc87\ud835\udc96\ud835\udc94\ud835\udc8a\ud835\udc90\ud835\udc8f \ud835\udc6b\ud835\udc86\ud835\udc91\ud835\udc95\ud835\udc89 \ud835\udc6a\ud835\udc90\ud835\udc8e\ud835\udc91\ud835\udc8d\ud835\udc86\ud835\udc95\ud835\udc8a\ud835\udc90\ud835\udc8f \ud83e\udd16"},"content":{"rendered":"\n<p>Thrilled to share our latest 
work on enabling robust sparse-to-dense reconstruction for endoscopic surgical robots \u2014 bridging the gap between \ud835\udc2c\ud835\udc29\ud835\udc1a\ud835\udc2b\ud835\udc2c\ud835\udc1e \ud835\udc2c\ud835\udc1e\ud835\udc27\ud835\udc2c\ud835\udc28\ud835\udc2b \ud835\udc1d\ud835\udc1a\ud835\udc2d\ud835\udc1a \ud835\udc1a\ud835\udc27\ud835\udc1d \ud835\udc21\ud835\udc22\ud835\udc20\ud835\udc21-\ud835\udc2a\ud835\udc2e\ud835\udc1a\ud835\udc25\ud835\udc22\ud835\udc2d\ud835\udc32 \ud835\udfd1\ud835\udc03 \ud835\udc26\ud835\udc1a\ud835\udc29\ud835\udc29\ud835\udc22\ud835\udc27\ud835\udc20 using a novel \ud835\udc1d\ud835\udc22\ud835\udc1f\ud835\udc1f\ud835\udc2e\ud835\udc2c\ud835\udc22\ud835\udc28\ud835\udc27-\ud835\udc1b\ud835\udc1a\ud835\udc2c\ud835\udc1e\ud835\udc1d framework.<\/p>\n\n\n\n<p>Fine-tuning foundational models often fails due to a lack of dense ground truth, and self-supervised methods struggle with scale ambiguity; sparse depth sensors, by contrast, offer a reliable geometric prior.<\/p>\n\n\n\n<p>This motivated us to develop EndoDDC, a method that robustly generates dense depth maps by fusing RGB images with sparse depth inputs.<\/p>\n\n\n\n<p>\ud83e\udde0\u2728 \ud835\udc16\ud835\udc21\ud835\udc1a\ud835\udc2d \ud835\udc30\ud835\udc1e \ud835\udc1d\ud835\udc1e\ud835\udc2f\ud835\udc1e\ud835\udc25\ud835\udc28\ud835\udc29\ud835\udc1e\ud835\udc1d:<\/p>\n\n\n\n<p>A diffusion-driven depth completion architecture that:<\/p>\n\n\n\n<p>\ud83d\udd39 Integrates sparse depth and RGB inputs to overcome the limitations of pure visual estimation.<\/p>\n\n\n\n<p>\ud83d\udd39 Utilizes a Multi-scale Feature Extraction and Depth Gradient Fusion module to capture fine-grained surface orientation and local structure.<\/p>\n\n\n\n<p>\ud83d\udd39 Optimizes depth maps iteratively using a conditional diffusion model, refining geometry even in regions with weak textures or reflections.<\/p>\n\n\n\n<p>\ud83c\udfaf \ud835\udc0a\ud835\udc1e\ud835\udc32 
\ud835\udc11\ud835\udc1e\ud835\udc2c\ud835\udc2e\ud835\udc25\ud835\udc2d\ud835\udc2c:<\/p>\n\n\n\n<p>\u2705 25.55% and 9.03% improvements in accuracy on the StereoMIS and C3VD datasets compared to SOTA surgical depth estimators such as EndoDAC.<\/p>\n\n\n\n<p>\u2705 7.35% and 5.28% reductions in RMSE on StereoMIS and C3VD compared to the best depth completion baseline (OGNI-DC).<\/p>\n\n\n\n<p>\u2705 Outperformed both foundational models (DepthAnything-v2) and standard depth completion methods (Marigold-DC) in accuracy and robustness.<\/p>\n\n\n\n<p>\ud83d\udca1 \ud835\udc16\ud835\udc21\ud835\udc32 \ud835\udc22\ud835\udc2d \ud835\udc26\ud835\udc1a\ud835\udc2d\ud835\udc2d\ud835\udc1e\ud835\udc2b\ud835\udc2c:<\/p>\n\n\n\n<p>This work demonstrates that diffusion models can effectively solve the &#8220;sparse-to-dense&#8221; challenge in medical imaging. By providing accurate depth completion despite complex lighting and texture conditions, EndoDDC has the potential to significantly enhance autonomous navigation, procedural safety, and spatial awareness in minimally invasive surgery.<\/p>\n\n\n\n<p>\ud83d\udd16 <strong>#DepthCompletion<\/strong> <strong>#DiffusionModel<\/strong> <strong>#EndoscopicSurgery<\/strong> <strong>#SurgicalNavigation<\/strong>&nbsp;<strong>#ICRA<\/strong> <strong>#CUHKEngineering<\/strong> <strong>#CUHK<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/media.licdn.com\/dms\/image\/v2\/D5622AQEMdB6nkD3TEA\/feedshare-shrink_800\/B56ZxpIbugHsAg-\/0\/1771290344607?e=1772668800&amp;v=beta&amp;t=zr88fal1NvVJarzr20kCWpHSo8uOcAf1p0bY_ae28PI\" alt=\"No alternative text description for this image\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/media.licdn.com\/dms\/image\/v2\/D5622AQHJGXtRl6w1cQ\/feedshare-shrink_800\/B56ZxpIbpZKIAg-\/0\/1771290344192?e=1772668800&amp;v=beta&amp;t=PNgdEEkTjpJjDG0fvfueV-EqvaHgjE6_WNMW2k5KY4w\" alt=\"No alternative text description for this 
image\" \/><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Thrilled to share our latest work on enabling robust sparse-to-dense reconstruction for endoscopic surgical robots \u2014 bridging the gap between \ud835\udc2c\ud835\udc29\ud835\udc1a\ud835\udc2b\ud835\udc2c\ud835\udc1e \ud835\udc2c\ud835\udc1e\ud835\udc27\ud835\udc2c\ud835\udc28\ud835\udc2b \ud835\udc1d\ud835\udc1a\ud835\udc2d\ud835\udc1a \ud835\udc1a\ud835\udc27\ud835\udc1d \ud835\udc21\ud835\udc22\ud835\udc20\ud835\udc21-\ud835\udc2a\ud835\udc2e\ud835\udc1a\ud835\udc25\ud835\udc22\ud835\udc2d\ud835\udc32 \ud835\udfd1\ud835\udc03 \ud835\udc26\ud835\udc1a\ud835\udc29\ud835\udc29\ud835\udc22\ud835\udc27\ud835\udc20 using a novel \ud835\udc1d\ud835\udc22\ud835\udc1f\ud835\udc1f\ud835\udc2e\ud835\udc2c\ud835\udc22\ud835\udc28\ud835\udc27-\ud835\udc1b\ud835\udc1a\ud835\udc2c\ud835\udc1e\ud835\udc1d framework. Fine-tuning foundational models often fails due to a lack of dense ground truth, and self-supervised methods struggle with\u2026 <a class=\"continue-reading-link\" href=\"http:\/\/www.labren.org\/mm\/news\/%f0%9f%9a%80-icra-2026-%f0%9d%91%ac%f0%9d%92%8f%f0%9d%92%85%f0%9d%92%90%f0%9d%91%ab%f0%9d%91%ab%f0%9d%91%aa-%f0%9d%91%b3%f0%9d%92%86%f0%9d%92%82%f0%9d%92%93%f0%9d%92%8f%f0%9d%92%8a%f0%9d%92%8f\/\">Continue 
reading<\/a><\/p>\n","protected":false},"author":17,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"ngg_post_thumbnail":0,"footnotes":""},"categories":[4],"tags":[],"class_list":["post-3447","post","type-post","status-publish","format-standard","hentry","category-news"],"_links":{"self":[{"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/posts\/3447","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/users\/17"}],"replies":[{"embeddable":true,"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/comments?post=3447"}],"version-history":[{"count":1,"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/posts\/3447\/revisions"}],"predecessor-version":[{"id":3448,"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/posts\/3447\/revisions\/3448"}],"wp:attachment":[{"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/media?parent=3447"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/categories?post=3447"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/tags?post=3447"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}