Abstract
Intensity inhomogeneity represents a significant challenge in image processing: popular image segmentation algorithms produce inadequate results on images affected by it, and existing correction methods are often computationally expensive. Efficient implementations of bias field estimation and inhomogeneity correction are therefore required. In this work, we propose an extended mask-based version of the level set method recently presented by Li et al. [1]. We develop efficient CUDA implementations of both the original full-domain version and the extended mask-based version, and compare the two methods in terms of speed, efficiency, and performance. Magnetic resonance (MR) images are one of the main applications in practice.