Improving SLAM in the Operating Room Using Event Cameras
This project focuses on enhancing SLAM (Simultaneous Localization and Mapping) in operating rooms using event cameras, which offer higher dynamic range and temporal resolution than traditional cameras while suffering far less from motion blur. By leveraging these capabilities, the project aims to develop a robust, real-time SLAM system tailored to surgical environments, addressing challenges such as high-intensity surgical lighting and motion blur induced by rapid head movement.
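As a concrete illustration of the data these sensors produce, the minimal sketch below (an assumption for illustration, not project code) accumulates raw events, each an (x, y, t, polarity) tuple, into an exponentially decayed time surface that a frame-based SLAM front-end could consume; the decay constant tau is a hypothetical choice.

import numpy as np

def events_to_time_surface(events, height, width, tau=0.03):
    # events: (N, 4) array with columns (x, y, t, polarity),
    # t in seconds, polarity in {-1, +1}. tau is a hypothetical
    # decay constant in seconds.
    surface = np.zeros((height, width), dtype=np.float32)
    t_ref = events[:, 2].max()  # most recent timestamp in the batch
    for x, y, t, p in events:
        # Recent events dominate; older ones decay exponentially.
        surface[int(y), int(x)] = p * np.exp(-(t_ref - t) / tau)
    return surface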
Intraoperative Ultrasound-X-ray Calibration
Real-time navigation of complex orthopedic surgeries is challenging due to the dynamic surgical environment and limited visibility of patient anatomy. Intraoperative imaging modalities such as ultrasound and X-ray are commonly used to provide some level of guidance, although often in a purely visual form [1, 2]. Ultrasound provides high-frequency, radiation-free imaging but is limited to localized areas and is prone to noise [3]. X-ray, on the other hand, offers a wider field of view with less noise but introduces radiation, restricting the number of images that can be safely captured during a typical surgery. Combining ultrasound and X-ray data could balance these strengths, improving intraoperative anatomical reconstruction quality while reducing radiation exposure, both of which are vital for surgical navigation. However, to our knowledge, no existing setup or dataset integrates both modalities for this purpose.

This project focuses on developing a setup that enables sensor fusion of ultrasound and X-ray images to improve intraoperative surgical navigation. Alongside the hardware setup shown in Fig. 1, a key objective is to establish a practical calibration method between an ultrasound probe and a C-arm X-ray machine. This will lay the foundation for a paired X-ray-ultrasound dataset enabling many downstream applications involving these modalities. The final goal is to explore novel calibration techniques and system configurations that balance calibration accuracy with setup simplicity, facilitating efficient collection of joint ultrasound and X-ray datasets.
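One classical building block for such a calibration, sketched below under stated assumptions rather than as the project's chosen method, is rigid point-set registration: if the same fiducials (e.g., beads on a calibration phantom) can be localized in both the ultrasound and the C-arm coordinate frames, the probe-to-C-arm transform follows in closed form from the Kabsch/SVD solution.

import numpy as np

def estimate_rigid_transform(pts_us, pts_xray):
    # pts_us, pts_xray: (N, 3) arrays of corresponding fiducial
    # positions in the ultrasound and C-arm frames. Returns R (3x3)
    # and t (3,) such that R @ p_us + t approximates p_xray.
    mu_us, mu_xr = pts_us.mean(axis=0), pts_xray.mean(axis=0)
    H = (pts_us - mu_us).T @ (pts_xray - mu_xr)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_xr - R @ mu_us
    return R, t

This covers only the rigid part of the chain; the ultrasound image-to-probe transform and the C-arm projection geometry would each still need their own calibration steps.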
Exploring a Higher Level of Autonomy for Robotic-Assisted Microsurgery with Deep Reinforcement Learning
Robotic-assisted microsurgery has gained significant attention in recent years, particularly with the development of specialized systems like the Symani® Surgical System, designed for procedures such as microanastomoses of blood vessels. While these systems are fully controlled by surgeons, they are subject to variability due to differences in individual skill levels. Autonomous robotic surgery systems offer the potential to overcome these limitations by delivering enhanced precision, efficiency, and consistency compared to surgeon-controlled techniques. However, modeling complex procedures like microanastomoses presents significant challenges, making it difficult to apply traditional model-based approaches for autonomous control. In this project, we aim to investigate the use of deep reinforcement learning (RL) for autonomous robotic microanastomoses, leveraging the recently introduced Orbit-surgical training platform.
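To make the RL formulation concrete, the hypothetical sketch below casts one simplified sub-step (steering a tool tip to a target point) as a Gymnasium environment with continuous actions; the environment name, dynamics, and reward are illustrative and do not reproduce the Orbit-surgical API.

import numpy as np
import gymnasium as gym
from gymnasium import spaces

class ReachVesselEnv(gym.Env):
    # Hypothetical toy environment: move a tool tip toward a target,
    # a simplified stand-in for one microanastomosis sub-step.

    def __init__(self):
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(6,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.tip = self.np_random.uniform(-0.5, 0.5, size=3).astype(np.float32)
        self.target = self.np_random.uniform(-0.5, 0.5, size=3).astype(np.float32)
        return np.concatenate([self.tip, self.target]), {}

    def step(self, action):
        # Small displacement per step keeps the motion smooth and bounded.
        self.tip = np.clip(self.tip + 0.01 * np.asarray(action), -1.0, 1.0).astype(np.float32)
        dist = float(np.linalg.norm(self.tip - self.target))
        reward = -dist                    # dense shaping reward
        terminated = dist < 0.02          # close enough to the target
        return np.concatenate([self.tip, self.target]), reward, terminated, False, {}

Such an environment could then be trained with an off-the-shelf policy-gradient implementation (e.g., PPO) before moving to higher-fidelity surgical simulation.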
