Predicting diagnostic error in radiology via eye-tracking and image analytics

Preliminary investigation in mammography

Sophie Voisin, Frank Pinto, Garnetta Morin-Ducote, Kathleen B. Hudson, Georgia D. Tourassi

Research output: Contribution to journal › Article

14 Citations (Scopus)

Abstract

Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels.

Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated.

Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features.

Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists' gaze behavior and image content.
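The workflow described in the abstract (merging gaze behavior features with image features, selecting informative features, and training a classifier evaluated by ROC AUC) can be sketched in a few lines of Python. This is an illustrative outline only, not the authors' implementation: the feature sets, their dimensionalities, and the choice of learning algorithm (here a random-forest classifier behind univariate feature selection) are assumptions, and the arrays are filled with placeholder random data.

# Minimal sketch, not the study's actual pipeline: feature definitions,
# dimensionalities, and the classifier are assumed; data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder data: one row per (reader, case) trial, e.g., 6 readers x 40 cases.
n_trials = 240
gaze = rng.normal(size=(n_trials, 10))     # hypothetical gaze features (dwell time, fixation counts, ...)
image = rng.normal(size=(n_trials, 15))    # hypothetical image features (texture, BI-RADS descriptors)
error = rng.integers(0, 2, size=n_trials)  # 1 = diagnostic error, 0 = correct decision

# Merge gaze and image features into a single design matrix.
X = np.hstack([gaze, image])

# Feature selection followed by a classifier; performance scored with ROC AUC.
model = make_pipeline(
    SelectKBest(score_func=f_classif, k=10),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, error, cv=cv, scoring="roc_auc")
print(f"group-model AUC: {scores.mean():.3f} +/- {scores.std():.3f}")

A personalized model would simply restrict the rows to a single reader's trials before fitting, which is consistent with the abstract's observation that per-reader results varied with experience level and sample size.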

Original language: English (US)
Article number: 101906
Journal: Medical Physics
Volume: 40
Issue number: 10
DOI: 10.1118/1.4820536
State: Published - Jan 1 2013

Fingerprint

  • Mammography
  • Diagnostic Errors
  • Radiology
  • ROC Curve
  • Area Under Curve
  • Research Ethics Committees
  • Breast
  • Radiologists
  • Neoplasms
  • Machine Learning

All Science Journal Classification (ASJC) codes

  • Biophysics
  • Radiology, Nuclear Medicine and Imaging

Cite this

Voisin, Sophie; Pinto, Frank; Morin-Ducote, Garnetta; Hudson, Kathleen B.; Tourassi, Georgia D. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography. In: Medical Physics, Vol. 40, No. 10, 101906, 2013. https://doi.org/10.1118/1.4820536