Date of Award

May 2021

Document Type


Degree Name

Doctor of Philosophy (PhD)


School of Computing

Committee Member

Ronald Gimbel

Committee Member

Alexander Alekseyenko

Committee Member

Brian Dean

Committee Member

Moonseong Heo

Committee Member

Lior Rennert


Clinical evaluation of cancer therapeutics often involves a series of measurements of multiple tumor diameters. While a growing number of research studies have reported inter-observer variability in computed tomography (CT) measurements among radiologists, few interventional studies have been performed to reduce this variability. Furthermore, it remains unclear whether conventional statistical measures are appropriate for evaluating CT measurement variability. A data-mining tool that can extract tumor burden information from raw CT images has the potential to assist radiologists in reducing inter-observer variability.

In this dissertation, I present (1) a new measure for evaluating inter-observer variability in CT measurement of cancer lesions, (2) a peer benchmarking intervention designed to reduce that variability, and (3) deep learning frameworks for semi-automated measurement of solid tumors. First, 13 board-certified radiologists from Prisma Health repeatedly reviewed the same CT image sets of lung lesions and hepatic metastases during three non-contiguous time periods (T1, T2, T3). The intervention tool was presented to the radiologists prior to T3. Various analytical methods were employed to assess the performance of the proposed measure and the peer benchmarking intervention tool. Next, a total of 1,506 CT slices, selected from 465,152 CT slices, were reviewed and measured by three experienced radiologists to train deep learning classifiers. The deep neural network classifiers were trained for binary classification of cancer lesions as larger or smaller than 32 pixels within a 128 × 128-pixel frame, and the final measurement was produced by converting multiple classification results into a magnitude in centimeters. The inter-observer variability between a human radiologist and the proposed tool was lower than the inter-observer variability between human radiologists.


