The ArthroNav Project

Computer Assisted Navigation in Orthopedic Surgery using Endoscopic Images

Structure-from-Motion

When a sufficient number of image features are matched across a sequence of arthroscopic frames, it becomes possible to establish geometric relations between their image locations and the motion trajectory of the arthroscope. Such relations have already been exploited to perform accurate motion estimation with standard digital cameras and to build 3D reconstructions of the viewed scene.
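As a toy illustration of these geometric relations, the sketch below (plain NumPy, not project code) verifies the epipolar constraint that the matched image locations of a 3D point must satisfy under a known camera motion; the motion and point values are arbitrary examples:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Hypothetical relative motion of the camera between two frames:
# a small rotation R about the optical axis and a translation t.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.2, 0.0, 0.05])

# The essential matrix encodes this relative pose: E = [t]_x R.
E = skew(t) @ R

# A 3D point expressed in the first camera frame, and in the second
# frame after the motion (convention: X2 = R @ X1 + t).
X1 = np.array([0.3, -0.1, 2.0])
X2 = R @ X1 + t

# Normalized image projections of the matched feature in both frames.
x1 = X1 / X1[2]
x2 = X2 / X2[2]

# Matched locations satisfy the epipolar constraint x2^T E x1 = 0,
# which ties feature correspondences to the camera motion (R, t).
residual = x2 @ E @ x1
```

Structure-from-Motion pipelines run this logic in reverse: from many such correspondences they estimate E, and from it the motion (R, t) and the 3D structure.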

Applying these methods to arthroscopic images raises specific issues that must be tackled. In medical imagery it is usually very difficult to obtain a large number of reliable features that can act as localization landmarks and hence be used to estimate the camera motion. The development of sRD-SIFT made it possible to extract many more features from this type of image, providing sufficient input data for offline Structure-from-Motion algorithms and yielding dense 3D reconstructions of the scene viewed by the arthroscope, as shown in the following video.

3D reconstruction using SfM
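One reason standard feature detectors struggle here is the strong radial distortion of endoscopic lenses, which sRD-SIFT is designed to cope with. As background, here is a minimal NumPy sketch of the first-order division model commonly used for such lenses; the distortion parameter `xi` and the point values are illustrative, not calibrated values from the project:

```python
import numpy as np

def undistort_division(pd, xi):
    """First-order division model: map distorted points (normalized image
    coordinates, distortion center at the origin) to undistorted points."""
    r2 = np.sum(pd ** 2, axis=-1, keepdims=True)
    return pd / (1.0 + xi * r2)

def distort_division(pu, xi):
    """Inverse mapping, using the root of xi*ru*rd**2 - rd + ru = 0 that
    tends to ru as xi -> 0 (points away from the distortion center)."""
    ru = np.linalg.norm(pu, axis=-1, keepdims=True)
    rd = 2.0 * ru / (1.0 + np.sqrt(1.0 - 4.0 * xi * ru ** 2))
    return pu * (rd / ru)

# Illustrative barrel distortion (xi < 0 pulls points toward the center).
xi = -0.2
pts = np.array([[0.3, -0.2], [0.1, 0.4], [-0.5, 0.2]])
distorted = distort_division(pts, xi)
```

Because the distortion compresses image regions non-uniformly, a detector that ignores it finds fewer stable features near the image border; modeling the distortion explicitly is what lets a SIFT-like detector work on these images.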



Although good reconstructions can be obtained with offline methods, the available set of features is still not reliable enough for real-time localization, which is essential for assisted navigation during surgery. The Optotracker can provide the additional information needed to estimate motion accurately in real time. The main challenge in integrating Optotracker and visual information is estimating the relative pose between the arthroscope's optical center and the Optotracker sensor.

This problem, known as Hand-Eye calibration, is currently solved by an offline procedure that can be performed together with intrinsic camera calibration. However, this procedure is not accurate enough for arthroscopic cameras, and it does not account for changes in the Hand-Eye transformation during operation, which can occur due to lens rotation and physical deformation of the arthroscopic probe. Solving the problem properly requires performing Hand-Eye calibration and Structure-from-Motion simultaneously, within a single framework that models the complete system. The development of a new method to accomplish this is the subject of ongoing PhD research.
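To make the Hand-Eye problem concrete, the sketch below implements a classical linear AX = XB solver in the spirit of Tsai-Lenz on synthetic rigid motions. It illustrates the standard offline formulation discussed above; it is not the new joint calibration method under development, and the variable names are mine:

```python
import numpy as np

def rot_axis(R):
    """Unit rotation axis of a rotation matrix, from its antisymmetric part."""
    a = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return a / np.linalg.norm(a)

def solve_hand_eye(As, Bs):
    """Linear AX = XB solver in the spirit of Tsai-Lenz.

    As: 4x4 relative motions of the camera (e.g. from Structure-from-Motion).
    Bs: corresponding 4x4 relative motions of the tracker sensor.
    Returns the 4x4 Hand-Eye transformation X satisfying A_i X = X B_i.
    """
    # 1. Rotation: corresponding rotation axes satisfy
    #    axis(R_A) = R_X @ axis(R_B); align the two axis sets with an
    #    SVD (Kabsch / orthogonal Procrustes).
    M = sum(np.outer(rot_axis(A[:3, :3]), rot_axis(B[:3, :3]))
            for A, B in zip(As, Bs))
    U, _, Vt = np.linalg.svd(M)
    Rx = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    # 2. Translation: (R_A - I) t_X = R_X t_B - t_A, stacked over all
    #    motion pairs and solved in the least-squares sense.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```

A solver of this kind needs at least two motion pairs with non-parallel rotation axes, which is one reason the offline procedure is fragile in the operating room and motivates estimating the Hand-Eye transformation and the scene structure jointly.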