Towards ultrasound as a tomographic imaging modality (Monday, 13:30, Schinkelsaal)
Medical ultrasound has traditionally played a special role as a hand-held, dynamic imaging modality that shows only a limited view of the anatomy at a time. However, with the help of either tracking hardware or image analysis, it can become a more reproducible, large-scale imaging modality while retaining its other unique advantages. In this work, we aim both to create whole-organ images from real-time 3D ultrasound and to enable fusion with MRI; we evaluate the approach on liver data from eight volunteers.
First, an automatic image-based multi-volume registration procedure creates an extended field-of-view volume from the overlapping data of several ultrasound acquisitions; it yields minimal drift and achieves real-time performance through an implementation on graphics hardware. The registered volumes are compounded into a single large volume by taking the median of the available intensity values at each voxel.
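The median compounding step can be sketched as follows. This is an illustrative NumPy version, not the GPU implementation used in the work; it assumes the volumes have already been registered and resampled into a common reference grid, and the function name and mask convention are hypothetical.

```python
import numpy as np

def compound_median(volumes, masks):
    """Compound co-registered ultrasound sweeps into one large volume by
    taking the voxel-wise median of the intensities actually covered.

    volumes: list of 3D arrays resampled into a common reference grid
    masks:   list of boolean arrays, True where the sweep covers a voxel
    (Hypothetical helper; the original GPU implementation is not public.)
    """
    stack = np.stack(volumes).astype(np.float32)
    stack[~np.stack(masks)] = np.nan   # ignore voxels a sweep does not cover
    out = np.nanmedian(stack, axis=0)  # median of the available intensities
    return np.nan_to_num(out)          # zero where no sweep has data
```

Using the median rather than the mean makes the compounded volume robust to outlier intensities, e.g. shadowing artifacts present in only one of the overlapping sweeps.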
If other imaging data, such as a prior MRI scan of the patient, is available, the ultrasound data can be registered to it automatically. For that purpose, we first need a coarse but robust estimate of the geometric transformation to the MRI scan. We learn the local appearance of the diaphragm, which is clearly visible in both modalities, using the Random Forests framework. One forest is learnt per modality; in both cases, the features are the intensity, the gradients, and the Laplacian of each pixel and its neighborhood after smoothing at three different scales. The initial registration is then found, using a global optimizer, as the rigid transformation that minimizes the sum of the Euclidean distances of all diaphragm points detected in the MRI image to the US probability map. This is followed by an automatic image-based multi-modal registration that uses the 3D LC2 similarity metric and was tested with rigid, affine, and cubic-spline deformable transformation models.
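The cost function of the coarse rigid initialization could be sketched as below, assuming the forest-based diaphragm detections already exist. A common trick, used here as an assumption about the implementation, is to threshold the US probability map and precompute a Euclidean distance transform, so that the sum of point-to-surface distances reduces to a lookup per point; the function and parameter names are illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates

def coarse_alignment_cost(mri_points, us_prob, rigid, threshold=0.5):
    """Sum of Euclidean distances from transformed MRI diaphragm points
    to the diaphragm detected in the US volume (sketch only; the global
    optimizer and the forest detections are assumed to exist elsewhere).

    mri_points: (N, 3) diaphragm voxel coordinates detected in the MRI
    us_prob:    3D diaphragm probability map from the US forest
    rigid:      4x4 homogeneous rigid transform, MRI -> US voxel space
    """
    # Distance (in voxels) from every location to the nearest voxel
    # where the diaphragm probability exceeds the threshold.  In a real
    # optimization loop this would be precomputed once.
    dist = distance_transform_edt(us_prob < threshold)
    pts_h = np.c_[mri_points, np.ones(len(mri_points))]  # homogeneous coords
    warped = (rigid @ pts_h.T)[:3]                       # (3, N) warped points
    # Trilinear lookup of the distance map at the warped positions
    d = map_coordinates(dist, warped, order=1, mode="nearest")
    return float(d.sum())
```

A global optimizer (e.g. exhaustive or evolutionary search over the six rigid parameters) would then minimize this cost to obtain the initialization for the subsequent LC2-based registration.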
The overall system was evaluated on abdominal data from eight volunteers. Ultrasound scans were acquired with a GE Vivid E9 machine and a 4V matrix probe; the MRI scans were T1 Dixon sequences on a Siemens Magnetom Avanto 1.5T scanner. We achieve automatic reconstruction and registration without any user interaction, with an overall computation time of 1-2 minutes. An assessment of the registration errors based on physician-defined landmarks yields errors on the order of 1 cm, albeit with a high estimated landmark localization error of 2-7 mm.
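The landmark-based error assessment amounts to mapping each MRI landmark through the estimated transform and measuring its distance to the corresponding US landmark. A minimal sketch, with illustrative names not taken from the original evaluation:

```python
import numpy as np

def landmark_errors(mri_landmarks, us_landmarks, transform):
    """Registration error at physician-defined landmark pairs: the
    Euclidean distance (in mm) between each US landmark and its MRI
    counterpart mapped through the estimated transform.
    (Illustrative helper; not the original evaluation script.)

    mri_landmarks, us_landmarks: (N, 3) corresponding positions in mm
    transform: 4x4 homogeneous MRI -> US transform
    """
    pts_h = np.c_[mri_landmarks, np.ones(len(mri_landmarks))]
    mapped = (transform @ pts_h.T)[:3].T   # MRI landmarks in US space
    return np.linalg.norm(mapped - us_landmarks, axis=1)
```

Note that any such error estimate is lower-bounded by the landmark localization uncertainty itself, which is why the 2-7 mm localization error matters when interpreting the ~1 cm registration error.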
We believe that the presented methods pave the way towards fully automatic multi-modal integration of 3D ultrasound with other image data, without the need for external tracking hardware. This should further strengthen the emerging role of ultrasound as a quasi-tomographic imaging modality, while preserving the flexibility that allows intra-operative use.
Dr. Wolfgang Wein
ImFusion GmbH, TU München