Enhancing Bone Segmentation in Ultrasound Imaging Using Physics-Informed Deep Learning Models
Computer-Assisted Orthopedic Surgery (CAOS) has been demonstrated to improve surgical precision in various procedures, including spinal fusion surgery, arthroplasty, and bone deformity correction [1,2]. Ultrasound, as a radiation-free, cost-effective, and portable alternative to CT and X-ray imaging, enables real-time visualization of both soft tissues and bones through the reflection of acoustic waves. Despite these advantages, ultrasound imaging has inherent limitations, such as a low signal-to-noise ratio, acoustic shadowing, and speckle noise, which make interpretation challenging for surgeons. In our project, we have collected a dataset of over 100,000 ultrasound images with precise bone annotations. The bone labels are categorized into two classes: high-intensity regions (high signal-to-noise ratio) and low-intensity regions (low signal-to-noise ratio), as shown in Figure 1. In our experiments, surgeons' bone-labeling performance declined significantly for low-intensity regions compared to high-intensity regions. A minimal sketch of a segmentation loss for this label setup follows the references below.

[1] Pandey, Prashant U., et al. "Ultrasound bone segmentation: A scoping review of techniques and validation practices." Ultrasound in Medicine & Biology 46.4 (2020): 921-935.
[2] Hohlmann, Benjamin, Peter Broessner, and Klaus Radermacher. "Ultrasound-based 3D bone modelling in computer assisted orthopedic surgery - a review and future challenges." Computer Assisted Surgery 29.1 (2024): 2276055.
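To make the two-class label setup concrete, below is a minimal PyTorch sketch of a multi-class soft Dice loss for training a segmentation network on these annotations. The class encoding (0 = background, 1 = high-intensity bone, 2 = low-intensity bone) and the choice of backbone are illustrative assumptions, not the project's actual pipeline:

import torch
import torch.nn.functional as F

# Hypothetical class indices (an assumption, not the project's actual encoding):
# 0 = background, 1 = high-intensity bone, 2 = low-intensity bone
NUM_CLASSES = 3

def soft_dice_loss(logits, target, eps=1e-6):
    """Multi-class soft Dice loss.

    logits: (B, C, H, W) raw network outputs
    target: (B, H, W) integer class map
    """
    probs = logits.softmax(dim=1)
    onehot = F.one_hot(target, NUM_CLASSES).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)  # sum over batch and spatial axes, keep per-class terms
    intersection = (probs * onehot).sum(dims)
    cardinality = probs.sum(dims) + onehot.sum(dims)
    dice = (2 * intersection + eps) / (cardinality + eps)
    return 1 - dice.mean()

# Usage with any segmentation backbone producing (B, 3, H, W) logits:
# loss = soft_dice_loss(model(images), labels)

Reporting Dice per class, rather than only the average, would also quantify how much harder the low-intensity regions are for a model, mirroring the gap observed in the surgeons' labels.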
Exploring a Higher Level of Autonomy for Robotic-Assisted Microsurgery with Deep Reinforcement Learning
Robotic-assisted microsurgery has gained significant attention in recent years, particularly with the development of specialized systems such as the Symani® Surgical System, designed for procedures like microanastomoses of blood vessels. Because these systems are fully controlled by surgeons, their outcomes vary with individual skill levels. Autonomous robotic surgery systems offer the potential to overcome this limitation by delivering greater precision, efficiency, and consistency than surgeon-controlled techniques. However, complex procedures such as microanastomoses are difficult to model, which makes traditional model-based approaches to autonomous control hard to apply. In this project, we aim to investigate the use of deep reinforcement learning (RL) for autonomous robotic microanastomoses, leveraging the recently introduced ORBIT-Surgical training platform.
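As a rough illustration of the intended training setup, the sketch below trains a PPO agent on a Gymnasium-style environment using stable-baselines3. The environment id "MicroAnastomosis-v0" is hypothetical, and ORBIT-Surgical's actual task registry, observation space, and action space may differ; the sketch only shows the generic RL loop:

import gymnasium as gym
from stable_baselines3 import PPO

# "MicroAnastomosis-v0" is a hypothetical environment id used for
# illustration; the real ORBIT-Surgical task names may differ.
env = gym.make("MicroAnastomosis-v0")

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)

# Roll out the learned policy
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated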
3D Surface Reconstruction from Sparse Viewpoints for Medical Education and Surgical Navigation
In medical education and surgical navigation, achieving accurate multi-view 3D surface reconstruction from sparse viewpoints is a critical challenge. This Master's thesis addresses the problem by first computing normal maps, and optionally reflectance maps, for each viewpoint, and then fusing these maps to obtain the geometry of the scene and, optionally, its reflectance. The research explores multiple techniques for normal map computation, including photometric stereo, data-driven methods, and stereo matching, either individually or in combination (a minimal photometric-stereo sketch follows the references below). The outcomes of this study aim to pave the way for highly realistic and accurate 3D models of surgical fields and anatomical structures. Such models could significantly improve medical education by providing detailed, interactive representations for learning. In the context of surgical navigation, they could also enhance the accuracy and effectiveness of surgical procedures.

References:
Yu, Zehao, Songyou Peng, Michael Niemeyer, Torsten Sattler, and Andreas Geiger. "MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction." NeurIPS 2022.
Brument, Baptiste, Robin Bruneau, Yvain Quéau, Jean Mélou, François Lauze, Jean-Denis Durou, and Lilian Calvet. "RNb-NeuS: Reflectance and Normal-Based Reconstruction with NeuS." CVPR 2024.
Bae, Gwangbin, and Andrew J. Davison. "Rethinking Inductive Biases for Surface Normal Estimation." CVPR 2024.
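To illustrate the classical photometric-stereo option mentioned above, here is a minimal Lambertian least-squares sketch in NumPy. It assumes calibrated directional lights and grayscale images; the function and variable names are illustrative, not part of the thesis pipeline:

import numpy as np

def photometric_stereo(images, light_dirs):
    """Classic Lambertian photometric stereo.

    images:     (K, H, W) grayscale images under K known lights
    light_dirs: (K, 3) unit light directions
    Returns per-pixel unit normals (H, W, 3) and albedo (H, W).
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)  # stack pixels: (K, H*W)
    # Solve light_dirs @ G = I in the least-squares sense,
    # where each column of G is albedo * normal for one pixel.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)                   # (H*W,)
    normals = (G / np.clip(albedo, 1e-8, None)).T        # (H*W, 3)
    return normals.reshape(H, W, 3), albedo.reshape(H, W)

The recovered normal maps are exactly the per-viewpoint input that the fusion stage (e.g., a NeuS-style implicit surface, as in RNb-NeuS) would consume.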
Enhancing 3D Reconstruction and Tracking of Anatomy for Open Orthopedic Surgery
Computer-assisted interventions have advanced significantly with computer vision, improving tasks such as surgical navigation and robotics. While marker-based navigation systems have increased accuracy and reduced revision rates, their technical limitations hinder integration into surgical workflows. This Master's thesis proposes using the OR-X research infrastructure to collect datasets of human anatomies with 3D ground truth under realistic surgical conditions. The project will evaluate state-of-the-art 3D reconstruction and tracking methods and adapt them to the orthopedic image domain, focusing on a promising marker-less, optical camera-based approach for spine surgery. This work aims to improve the precision and workflow integration of surgical navigation systems.
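For the evaluation against 3D ground truth, one common metric is the symmetric Chamfer distance between a reconstructed surface and the ground-truth mesh, both sampled as point clouds. A minimal SciPy sketch follows; how the point clouds are obtained (mesh sampling, units, rigid alignment) is assumed to be handled upstream:

import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pred_pts, gt_pts):
    """Symmetric Chamfer distance between two (N, 3) point clouds.

    Returned in the same units as the inputs (e.g., mm for anatomy).
    """
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)   # nearest GT point per prediction
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)   # nearest prediction per GT point
    return d_pred_to_gt.mean() + d_gt_to_pred.mean()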
Advancing Camera Localization in Surgical Environments
OR-X (https://or-x.ch) is an innovative research infrastructure replicating an operating theater, equipped with an extensive array of cameras. This setup enables the collection of comprehensive datasets through densely positioned cameras capturing detailed surgical scenes. A key challenge addressed in this Master's thesis is computing the positions and orientations of dynamic egocentric cameras, such as head-mounted displays or GoPro cameras. Solving this problem can significantly impact applications in automatic documentation, education, surgical navigation, and robotic surgery.
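A standard building block for localizing such a moving camera against a pre-mapped scene is pose estimation from 2D-3D correspondences, e.g., PnP with RANSAC. The OpenCV sketch below assumes the correspondences (and the egocentric camera's intrinsics) are produced upstream, for instance by feature matching against a reconstruction built from the static OR-X cameras:

import cv2
import numpy as np

def localize_camera(points_3d, points_2d, K, dist_coeffs=None):
    """Estimate camera pose from 2D-3D correspondences via PnP + RANSAC.

    points_3d: (N, 3) scene points in the room/map frame
    points_2d: (N, 2) their pixel observations in the egocentric image
    K:         (3, 3) camera intrinsics
    Returns rotation matrix R and translation t (map -> camera).
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)  # assume an undistorted image
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64), points_2d.astype(np.float64),
        K, dist_coeffs, reprojectionError=3.0)
    if not ok:
        raise RuntimeError("PnP failed: too few consistent correspondences")
    R, _ = cv2.Rodrigues(rvec)  # axis-angle -> rotation matrix
    return R, tvec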