How Radiomics Leverages AI & Deep Learning for Informed Decision-Making

May 5, 2021 | Resource

By Robert Holmes, PhD

Radiomics is poised to be the next great advance in the world’s ongoing battle against cancer. Radiomics enables healthcare and life science organizations to analyze traditional images, such as MRIs and PET scans, then use artificial intelligence (AI) to extract more than 2,300 data points about the biology of a tumor or lesion. By comparing this newly available data to past images, as well as to the biology of healthy organs, clinicians can gain a much deeper understanding of how a tumor or lesion is responding to a specific therapy, informing care and treatment decisions at every step. Following is a look at how AI and deep learning inform the radiomic decision-making process, as well as the outcome models produced by radiomics.

Leveraging AI & Deep Learning for Informed Decision-Making

The process begins when a user selects a patient image and a region of interest for analysis. Next, the user marks what they want to see within the image as part of a segmentation and contouring step that accompanies annotation and labeling. The image is then rendered in 3-D so users can see the structure and understand it in the context of the patient’s anatomy.
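HealthMyne’s segmentation tooling is interactive, but the underlying idea of pairing an image with a binary lesion mask can be sketched with open-source libraries. The minimal example below uses SimpleITK; the file paths and seed coordinates are hypothetical placeholders, not part of the HealthMyne workflow.

```python
# Minimal ROI segmentation sketch using SimpleITK (not HealthMyne's tooling).
# File paths and the seed point are hypothetical placeholders.
import SimpleITK as sitk

# Load the patient image (e.g., a CT or MRI volume)
image = sitk.ReadImage("patient_ct.nii.gz")

# Region-growing segmentation from a seed point inside the lesion; in an
# interactive tool, the seed and intensity bounds come from the user's contouring
seed = (120, 145, 60)  # (x, y, z) voxel index
mask = sitk.ConnectedThreshold(image, seedList=[seed], lower=-50, upper=200)

# Save the binary mask; downstream feature extraction treats it as the lesion contour
sitk.WriteImage(mask, "lesion_mask.nii.gz")
```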

Next comes the extraction of radiomic features, which can be visualized so users can see how the various metrics change over time. Users can select from a list of more than 2,300 radiomic features for analysis and applied use in research, therapy-response assessment, or treatment decisions.
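HealthMyne’s 2,300+ feature set is proprietary, but the general extraction step can be illustrated with the open-source pyradiomics library. The sketch below assumes the hypothetical image and mask files from the segmentation example above.

```python
# Radiomic feature extraction sketch using the open-source pyradiomics library
# (illustrative only; not HealthMyne's proprietary feature set).
from radiomics import featureextractor

# The default extractor computes shape, first-order, and texture feature classes
extractor = featureextractor.RadiomicsFeatureExtractor()

# Image and mask paths are the hypothetical outputs of the segmentation sketch
features = extractor.execute("patient_ct.nii.gz", "lesion_mask.nii.gz")

# Keep the numeric feature values (keys prefixed "original_"),
# dropping the diagnostic metadata that pyradiomics also returns
numeric = {k: v for k, v in features.items() if k.startswith("original_")}
print(f"Extracted {len(numeric)} features, e.g. {list(numeric)[:3]}")
```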

At this point in the process, users can begin to integrate other data sets to put the radiomic data in fuller context, by marrying it with clinical or genomic data, for example, as well as with other types of outcome data available from third parties. This integrated data is used to build outcome models of some aspect of patient care. Examples include the likelihood of progression-free survival, and the classification of patients into those likely to respond to immunotherapy and those who are not.
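As a rough sketch of what that integration step can look like in practice, the example below joins a table of radiomic features to clinical data with pandas. The file and column names are hypothetical; a real integration would also bring in genomic and third-party outcome data.

```python
# Sketch of joining radiomic features with clinical/outcome data using pandas.
# File and column names are hypothetical.
import pandas as pd

radiomic = pd.read_csv("radiomic_features.csv")  # one row of features per lesion, keyed by patient_id
clinical = pd.read_csv("clinical_data.csv")      # e.g., age, stage, therapy, outcome labels

# Merge on a shared patient identifier to form a single modeling table
cohort = radiomic.merge(clinical, on="patient_id", how="inner")

# Example outcome label: whether the patient responded to immunotherapy
X = cohort.drop(columns=["patient_id", "immunotherapy_response"])
y = cohort["immunotherapy_response"]
```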

HealthMyne Radiomic Outcomes Model Process

But how do we reliably connect this mass of data—including 2,300+ radiomic features for every tumor or lesion to be analyzed—to individual patients’ outcomes? By using a semi-automated analytics pipeline that takes the approach of visualize—reduce—select—validate. An initial visual “sanity check” identifies any obvious errors, outliers, and patterns that will inform subsequent analysis. Dimensionality reduction is used to identify and remove redundant data elements (highly correlated pairs of variables, for example), to estimate the inherent “true” dimension of the problem, and to project the data onto this smaller subspace. Selection involves applying many machine-learning algorithms and scoring their relative performance to identify the most successful model. The model is then subjected to rigorous validation on a “holdout” set of previously unseen data; only if it passes this step is it deemed suitable for deployment.
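The condensed sketch below shows the reduce, select, validate pattern with scikit-learn. It is illustrative of the approach, not HealthMyne’s semi-automated pipeline; the synthetic data stands in for a real radiomic feature table.

```python
# Condensed reduce -> select -> validate sketch with scikit-learn
# (illustrative of the pattern, not HealthMyne's pipeline).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a radiomic feature matrix and binary outcome labels
X, y = make_classification(n_samples=200, n_features=100, random_state=0)

# Hold out a set of previously unseen patients up front for final validation
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Reduce: project correlated features onto a smaller subspace (PCA here);
# Select: score several candidate models by cross-validation and keep the best
candidates = {
    "logistic": make_pipeline(StandardScaler(), PCA(n_components=20),
                              LogisticRegression(max_iter=1000)),
    "forest": make_pipeline(StandardScaler(), PCA(n_components=20),
                            RandomForestClassifier(random_state=0)),
}
scores = {name: cross_val_score(m, X_train, y_train, cv=5).mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)

# Validate: fit the winning model and check it on the holdout set before deployment
model = candidates[best].fit(X_train, y_train)
print(best, "holdout accuracy:", model.score(X_holdout, y_holdout))
```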

Once a robust, validated model is obtained, it can be deployed in the same system that was responsible for acquiring the original images: now, when a user segments a lesion in a new or existing patient, they immediately get the relevant patient-specific prediction or classification from the model. This can provide vital insight into therapy response and trigger predictive decision-support alerts that notify clinicians of changes to tumors or lesions that may warrant further investigation.
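A deployment step like the one described above can be sketched as persisting the validated model and scoring the feature vector of each newly segmented lesion against it. The file names and alert threshold below are hypothetical.

```python
# Deployment sketch: persist the validated model, then score new lesions as they are segmented
# (hypothetical file names and threshold; the real system ties this to image acquisition).
import joblib
import pandas as pd

# Saved once after validation: joblib.dump(model, "response_model.joblib")
model = joblib.load("response_model.joblib")

# Feature vector produced when the user segments the new lesion
new_lesion = pd.read_csv("new_lesion_features.csv")

# Patient-specific prediction, e.g., likelihood of responding to therapy
probability = model.predict_proba(new_lesion)[0, 1]

if probability >= 0.7:  # hypothetical alert threshold
    print(f"Decision-support alert: predicted responder probability {probability:.2f}")
```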

Although many of today’s radiomics applications focus on oncology, the field is highly applicable to other diseases, conditions, and therapeutic areas of interest. HealthMyne will expand our AI- and deep learning-enabled solutions into these areas as current and future client work progresses.