{"id":3471,"date":"2026-04-09T08:45:28","date_gmt":"2026-04-09T08:45:28","guid":{"rendered":"http:\/\/www.labren.org\/mm\/?p=3471"},"modified":"2026-04-09T08:45:29","modified_gmt":"2026-04-09T08:45:29","slug":"%f0%9f%9a%80-nvidia-gtc-2026-open-h-embodiment-the-worlds-first-and-largest-open-source-medical-robotics-dataset","status":"publish","type":"post","link":"http:\/\/www.labren.org\/mm\/news\/%f0%9f%9a%80-nvidia-gtc-2026-open-h-embodiment-the-worlds-first-and-largest-open-source-medical-robotics-dataset\/","title":{"rendered":"\ud83d\ude80\u00a0NVIDIA GTC 2026: Open-H-Embodiment \u2014 The World&#8217;s First and Largest Open-Source Medical Robotics Dataset"},"content":{"rendered":"\n<p>Thrilled to share our latest international collaboration! At NVIDIA GTC 2026 in San Jose, CA, the team led by&nbsp;<strong>Professor Hongliang Ren from The Chinese University of Hong Kong (CUHK)<\/strong>, in partnership with NVIDIA and 35 leading global institutions, officially released&nbsp;<strong>Open-H-Embodiment<\/strong>, the world\u2019s first and largest open-source dataset for medical robotics, now available on HuggingFace.<\/p>\n\n\n\n<p>During the GTC keynote, Kimberly Powell, NVIDIA\u2019s VP of Healthcare, highlighted this milestone. Our lab is honored to be a primary contributor, filling the critical gap in Embodied AI for medical robotics by providing high-fidelity data for contact dynamics and closed-loop control.<\/p>\n\n\n\n<p>\ud83e\udde0\u2728&nbsp;<strong>What we contributed &amp; developed:<\/strong><\/p>\n\n\n\n<p>This project breaks the &#8220;perception-heavy, execution-light&#8221; limitation of traditional medical AI. Key highlights include:<\/p>\n\n\n\n<p>\ud83d\udd39&nbsp;<strong>778 Hours of Massive Multimodal Data:<\/strong>&nbsp;The dataset covers 400 complete clinical surgeries and 9 major robotic platforms (e.g., dVRK, CMR Versius, Kuka). 
It includes 65% clinical data, 23% bench-top experiments, and 12% simulation data.<\/p>\n\n\n\n<p>\ud83d\udd39&nbsp;<strong>Three High-Value Specialized Datasets from Our Lab:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Dual-Source Ultrasound Dataset:<\/strong>\u00a0Expert-level trajectories covering in-vivo porcine EUS and human forearm scanning, overcoming the challenges of complex organ environments and multi-device calibration.<\/li>\n\n\n\n<li><strong>Robotic Surgery Skill Dataset:<\/strong>\u00a0Multi-modal data (RGB\/RGB-D + kinematics) for tissue manipulation and suturing, featuring millisecond-level synchronization and dual-mode control (teleoperation &amp; automation).<\/li>\n\n\n\n<li><strong>Flexible Endoscope Tracking Baseline:<\/strong>\u00a0A standardized dataset addressing hysteresis and deformation in flexible endoscopy, supporting nanosecond-level time synchronization.<\/li>\n<\/ul>\n\n\n\n<p>\ud83d\udd39&nbsp;<strong>Surgical VLA &amp; World Models:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>GR00T-H:<\/strong>\u00a0A 3B-parameter Vision-Language-Action model based on NVIDIA Isaac GR00T, capable of long-horizon dexterous tasks such as end-to-end suturing.<\/li>\n\n\n\n<li><strong>Cosmos-H-Surgical-Simulator:<\/strong>\u00a0An action-conditioned world model that boosts simulation efficiency by over 70x, bridging the sim-to-real gap.<\/li>\n<\/ul>\n\n\n\n<p>\ud83c\udfaf&nbsp;<strong>Key Results:<\/strong>&nbsp;\u2705&nbsp;<strong>Global Standardization:<\/strong>&nbsp;The first effort to unify medical robotic data across different devices and institutions under CC-BY-4.0. \u2705&nbsp;<strong>Efficiency Boost:<\/strong>&nbsp;Accelerated surgical simulation (600 simulations in 40 minutes) to generate high-fidelity video-action pairs. 
\u2705&nbsp;<strong>Clinical Relevance:<\/strong>&nbsp;Successfully captured nearly 500 hours of real-world clinical data for hernia, gallbladder, and uterine surgeries.<\/p>\n\n\n\n<p>\ud83d\udca1&nbsp;<strong>Why it matters:<\/strong>&nbsp;This initiative provides the foundational &#8220;bedrock&#8221; for Medical Physical AI. By sharing high-quality, synchronized data for surgery, ultrasound, and endoscopy, we are lowering the barrier for researchers worldwide to develop autonomous surgical agents that are both explainable and adaptive.<\/p>\n\n\n\n<p>\ud83c\udf31&nbsp;<strong>What\u2019s next?<\/strong>&nbsp;Our lab is continuing to deepen research in: \ud83d\udd39&nbsp;<strong>Reasoning-based autonomous control<\/strong>&nbsp;for surgical robots. \ud83d\udd39&nbsp;<strong>Cross-platform generalization<\/strong>&nbsp;of Medical VLA models. \ud83d\udd39&nbsp;<strong>Clinical translation<\/strong>&nbsp;of Embodied AI to improve patient outcomes.<\/p>\n\n\n\n<p><strong>Dataset address:<\/strong> <a href=\"https:\/\/huggingface.co\/datasets\/nvidia\/PhysicalAI-Robotics-Open-H-Embodiment\">https:\/\/huggingface.co\/datasets\/nvidia\/PhysicalAI-Robotics-Open-H-Embodiment<\/a><\/p>\n\n\n\n<p><strong>Project website:<\/strong> <a href=\"https:\/\/github.com\/open-h\">https:\/\/github.com\/open-h<\/a><\/p>\n\n\n\n<p>#NVIDIAGTC2026 #MedicalRobotics #EmbodiedAI #HuggingFace #CUHK #OpenSource #HealthcareInnovation<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><a href=\"http:\/\/www.labren.org\/mm\/wp-content\/uploads\/2026\/04\/\u5fae\u4fe1\u56fe\u7247_2026-04-09_164257_348-scaled.jpg\"><img loading=\"lazy\" decoding=\"async\" width=\"768\" height=\"1024\" src=\"http:\/\/www.labren.org\/mm\/wp-content\/uploads\/2026\/04\/\u5fae\u4fe1\u56fe\u7247_2026-04-09_164257_348-768x1024.jpg\" alt=\"\" class=\"wp-image-3474\" srcset=\"http:\/\/www.labren.org\/mm\/wp-content\/uploads\/2026\/04\/\u5fae\u4fe1\u56fe\u7247_2026-04-09_164257_348-768x1024.jpg 768w, 
http:\/\/www.labren.org\/mm\/wp-content\/uploads\/2026\/04\/\u5fae\u4fe1\u56fe\u7247_2026-04-09_164257_348-225x300.jpg 225w, http:\/\/www.labren.org\/mm\/wp-content\/uploads\/2026\/04\/\u5fae\u4fe1\u56fe\u7247_2026-04-09_164257_348-scaled.jpg 1920w\" sizes=\"auto, (max-width: 768px) 100vw, 768px\" \/><\/a><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><a href=\"http:\/\/www.labren.org\/mm\/wp-content\/uploads\/2026\/04\/\u622a\u5c4f2026-04-09-16.41.20.png\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"771\" src=\"http:\/\/www.labren.org\/mm\/wp-content\/uploads\/2026\/04\/\u622a\u5c4f2026-04-09-16.41.20-1024x771.png\" alt=\"\" class=\"wp-image-3472\" srcset=\"http:\/\/www.labren.org\/mm\/wp-content\/uploads\/2026\/04\/\u622a\u5c4f2026-04-09-16.41.20-1024x771.png 1024w, http:\/\/www.labren.org\/mm\/wp-content\/uploads\/2026\/04\/\u622a\u5c4f2026-04-09-16.41.20-300x226.png 300w, http:\/\/www.labren.org\/mm\/wp-content\/uploads\/2026\/04\/\u622a\u5c4f2026-04-09-16.41.20-768x578.png 768w, http:\/\/www.labren.org\/mm\/wp-content\/uploads\/2026\/04\/\u622a\u5c4f2026-04-09-16.41.20-465x350.png 465w, http:\/\/www.labren.org\/mm\/wp-content\/uploads\/2026\/04\/\u622a\u5c4f2026-04-09-16.41.20-150x113.png 150w, http:\/\/www.labren.org\/mm\/wp-content\/uploads\/2026\/04\/\u622a\u5c4f2026-04-09-16.41.20-369x278.png 369w, http:\/\/www.labren.org\/mm\/wp-content\/uploads\/2026\/04\/\u622a\u5c4f2026-04-09-16.41.20.png 1288w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Thrilled to share our latest international collaboration! 
At NVIDIA GTC 2026 in San Jose, CA, the team led by&nbsp;Professor Hongliang Ren from The Chinese University of Hong Kong (CUHK), in partnership with NVIDIA and 35 leading global institutions, officially released&nbsp;Open-H-Embodiment, the world\u2019s first and largest open-source dataset for medical robotics,\u2026 <a class=\"continue-reading-link\" href=\"http:\/\/www.labren.org\/mm\/news\/%f0%9f%9a%80-nvidia-gtc-2026-open-h-embodiment-the-worlds-first-and-largest-open-source-medical-robotics-dataset\/\">Continue reading<\/a><\/p>\n","protected":false},"author":17,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"ngg_post_thumbnail":0,"footnotes":""},"categories":[4],"tags":[],"class_list":["post-3471","post","type-post","status-publish","format-standard","hentry","category-news"],"_links":{"self":[{"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/posts\/3471","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/users\/17"}],"replies":[{"embeddable":true,"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/comments?post=3471"}],"version-history":[{"count":1,"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/posts\/3471\/revisions"}],"predecessor-version":[{"id":3475,"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/posts\/3471\/revisions\/3475"}],"wp:attachment":[{"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/media?parent=3471"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/categories?post=3471"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.labren.org\/mm\/wp-json\/wp\/v2\/tags?post=3471"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}