
Introduction

We focus on providing real-time learning and trace-ahead capabilities for region definition in image analysis tasks. In current medical image analysis, the reference standard for region delineation is an expert's manual outlining of the region. In certain domains, such as tumor identification, automatic delineation has achieved modest success (for example [7]). However, as Johnson et al. [4] note: ``Although image segmentation and contour/edge detections have been investigated for quite a long time, there is still no algorithm that can automatically find region boundaries perfectly from clinically obtained medical images. There are two reasons for this. One is that most of the image segmentation algorithms are still noise sensitive. The second reason is that most segmentation tasks require certain background knowledge about the region(s) of interest.''

Our model here is that a human expert sets down the initial several pixels of an image boundary, and a neural network continues the task by learning the local landscape and extending the boundary through image territory similar to that originally identified. One characteristic of neural networks is their adaptability to noise: if the initial image territory is noisy, the network can learn to navigate through it, addressing the first concern above. In addressing the second concern, we note that whole-scene analysis, a straightforward task for a human expert, has proved exceedingly difficult to automate. The expert/network combination we set forward capitalizes on what each does best: the expert provides global perspective and context, and the network quickly analyzes and works through similar local neighborhoods.
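To make the workflow concrete, the following is a minimal sketch, not the system described in this paper, of expert-seeded trace-ahead: a small network is trained on the expert's first boundary pixels to map a local intensity patch to the next step direction, and is then queried repeatedly to continue the boundary. The helper names (extract_patch, train_on_expert_trace, trace_ahead), the 8-direction step encoding, and the use of scikit-learn's MLPClassifier are illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Eight unit steps, indexed 0..7; the network classifies which one to take next.
    STEPS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

    def extract_patch(image, r, c, half=3):
        # Flattened (2*half+1)^2 intensity window centred on (r, c).
        # Assumes (r, c) lies at least `half` pixels from the image border.
        return image[r - half:r + half + 1, c - half:c + half + 1].ravel()

    def train_on_expert_trace(image, expert_points, half=3):
        # Learn (local patch) -> (next step) from the expert's initial boundary pixels.
        # Assumes the expert trace is an 8-connected sequence of pixel coordinates.
        X, y = [], []
        for (r0, c0), (r1, c1) in zip(expert_points[:-1], expert_points[1:]):
            X.append(extract_patch(image, r0, c0, half))
            y.append(STEPS.index((r1 - r0, c1 - c0)))
        net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        net.fit(np.array(X), np.array(y))
        return net

    def trace_ahead(image, net, start, n_steps, half=3):
        # Continue the boundary from `start` by repeatedly querying the network.
        r, c = start
        path = [(r, c)]
        for _ in range(n_steps):
            step = STEPS[int(net.predict([extract_patch(image, r, c, half)])[0])]
            r, c = r + step[0], c + step[1]
            path.append((r, c))
        return path

A real system would use richer local features than raw intensities and a more careful stepping rule; the sketch only illustrates the division of labor described above, in which the expert supplies the seed and the network extrapolates locally.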

We have focused on neural networks as the learning mechanism because of their generality. In earlier studies, we demonstrated their facility in learning non-linear region discriminations [1].


