We are thrilled to share our latest Comment published in #NatureReviewsBioengineering: “Artificial kinaesthesia in autonomous robotic surgery”.
Current autonomous surgical robots are heavily vision-centric. While they can “see” anatomy, they lack the intrinsic ability to “feel” tissue interactions, a crucial skill that human surgeons rely on for safety and dexterity.
In this article, we propose a hierarchical framework for Artificial Kinaesthesia to bridge this gap:
1. The Physical Level: Integrating proprioception and exteroception for high-resolution physical sensing.
2. The Algorithmic Level: Moving from raw signal processing to semantic understanding of contact.
3. The Architectural Level: Implementing Vision-Kinaesthesia-Language-Action models to achieve true sensorimotor synergy.
We believe the future of autonomous surgery lies in systems that can synergistically fuse vision and kinaesthesia to not just see, but truly feel, think, and act.
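For readers curious what fusing vision and kinaesthesia could look like in code, here is a minimal, purely illustrative PyTorch sketch: a small policy that concatenates a vision embedding with proprioceptive joint states and wrist force/torque readings before regressing an action. The module names, dimensions, and fusion scheme are assumptions chosen for illustration only and are not the architecture described in the Comment.

```python
# Illustrative sketch only (not the authors' implementation): a policy that
# fuses a vision embedding with a kinaesthetic embedding (joint states plus
# a 6-axis force/torque reading) before predicting a low-level action.
import torch
import torch.nn as nn


class VisionKinaesthesiaPolicy(nn.Module):
    def __init__(self, vision_dim=512, kin_dim=13, action_dim=7):
        super().__init__()
        # Encode proprioception (e.g., 7 joint angles) + exteroception
        # (6-axis wrist force/torque) into a compact kinaesthetic feature.
        self.kin_encoder = nn.Sequential(
            nn.Linear(kin_dim, 128), nn.ReLU(), nn.Linear(128, 128)
        )
        # Project a precomputed vision embedding (e.g., from a frozen image
        # backbone) into the same feature space.
        self.vis_proj = nn.Linear(vision_dim, 128)
        # Fuse both modalities and regress an action, e.g., an
        # end-effector velocity command.
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, action_dim)
        )

    def forward(self, vision_feat, kin_feat):
        fused = torch.cat(
            [self.vis_proj(vision_feat), self.kin_encoder(kin_feat)], dim=-1
        )
        return self.head(fused)


# Example usage with random stand-in data.
policy = VisionKinaesthesiaPolicy()
vision_feat = torch.randn(1, 512)   # image embedding
kin_feat = torch.randn(1, 13)       # 7 joint angles + 6-axis force/torque
action = policy(vision_feat, kin_feat)
print(action.shape)                 # torch.Size([1, 7])
```

Late fusion by concatenation is just the simplest possible baseline; the point of the Comment is that tighter, semantically grounded sensorimotor integration is needed beyond this kind of shallow fusion.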
Read the full paper here: https://lnkd.in/gqTEpYjs
Kudos to our amazing team, Dr. Tangyou Liu, Dr. Sishen YUAN, and Prof. Hongliang Ren.