Ultrasound (US) imaging is widely used for medical diagnosis as it enables real-time visualisation of internal body structures, such as organs, vessels, tendons and joints, as well as fetal imaging during pregnancy. However, extracting relevant clinical information from US scans is challenging because both image acquisition and clinical evaluation depend on a skilled operator.
The aim of this project is to use statistical analysis of the dynamics of ultrasound image texture to develop a generative model for categorising video sequences in 2D+t ultrasound images. In clinical practice, this could enable automated organ identification through efficient computational schemes, making the process as independent as possible of pose, viewpoint and individual differences in organ anatomy. Ultimately, the overall goal is to implement a rapid and robust spatio-temporal organ identification method. Additionally, as US imaging is highly dependent on the skills of the operator, a secondary objective is to evaluate operator skill from the dynamics of the images.
To achieve real-time recognition of various organs in ultrasound (US) 2D+t videos, the project will rely on scale- and rotation-invariant approaches and will exploit the spatio-temporal structure of the video signal. Ultrasound video databases will serve as reference training datasets. The tools developed in the course of this project will draw on advanced signal processing methods such as wavelet and scattering transforms, statistical methods such as classification and codeword generation, machine learning methods such as deep scattering networks, and methods from dynamical systems such as state-space analysis. The idea of this research is to learn and extract an invariant, lower-dimensional representation of the video using the dynamics of the local texture (the temporal structure of the spatial variations in the US signal). Organ localisation will then be achieved by comparing these local texture patterns with previously acquired US video samples of different organs. Such a comparison could be performed directly, as in content-based video retrieval approaches, or indirectly through learned classifiers.
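As an illustration of the invariant texture-descriptor idea, the sketch below computes a simplified first-order scattering-style feature for a single image patch: filter with an oriented wavelet bank (here a hand-rolled real Gabor bank rather than a full scattering network), take the modulus, and average spatially and over orientations to gain approximate translation and rotation invariance. All function names and parameter choices are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, theta, freq):
    """Real-valued oriented Gabor filter, used as the wavelet in this sketch."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    envelope = np.exp(-(xx**2 + yy**2) / (2 * (size / 4) ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def scatter_features(patch, n_orient=4, freqs=(0.2, 0.4), size=9):
    """First-order scattering-style descriptor: filter -> modulus -> average."""
    feats = []
    for freq in freqs:
        responses = []
        for k in range(n_orient):
            kern = gabor_kernel(size, np.pi * k / n_orient, freq)
            resp = np.abs(convolve2d(patch, kern, mode="same"))
            responses.append(resp.mean())  # spatial averaging: translation invariance
        # averaging over orientations gives approximate rotation invariance
        feats.append(np.mean(responses))
    return np.array(feats)

# Example: describe one 32x32 texture patch with a 2-dimensional feature vector.
patch = np.random.default_rng(0).random((32, 32))
features = scatter_features(patch)
```

In the full project, such descriptors would be computed over the 2D+t volume, extended to the temporal dimension, and fed into codeword generation or classifiers; libraries such as Kymatio provide proper scattering transforms.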
From a scientific point of view, this research topic is highly timely and relevant as it introduces recent, well-posed methods from the machine learning and statistical signal processing communities to the medical imaging community. While deep learning methods are now commonly used in medical imaging, this is, to the best of our knowledge, not yet the case for the more recent and more interpretable scattering transform and deep scattering networks.
From an application point of view, 2D+t ultrasound is a very prevalent imaging modality, widely accessible through portable and affordable equipment, so improvements in the autonomous analysis of ultrasound images will have a significant influence on public health monitoring. This project is synergetic with the Wellcome Trust and EPSRC funded GIFT-Surg project for fetal therapy and surgery led by UCL. We envision that the outcome of this research will help increase the chances of early diagnosis of a variety of fetal developmental abnormalities and disorders.
Fully funded 4-year MRes+PhD project available to EU and UK students.
This position is due to start on 26th September 2016.
To apply, please send a CV and an expression of interest to Dr Vercauteren: