Currently, clinicians rely mostly on population-level treatment effects from RCTs, usually considering only the treatment's benefits. This study proposes a process, focused on practical usability, for translating RCT data into personalized treatment recommendations that weigh benefits against harms and integrate subjective perceptions of their relative severity. Intensive blood pressure treatment (IBPT) was selected as the test case for demonstrating the proposed process, which was divided into three phases: (1) prediction models for the benefits and adverse events of IBPT were developed using Systolic Blood Pressure Intervention Trial (SPRINT) data and externally validated on retrospective Clalit Health Services (CHS) data; (2) the predicted risk reductions and risk increases from these models were combined into a severity-weighted benefit-to-harm ratio that yields a yes/no IBPT recommendation; (3) the analysis outputs were summarized in a decision support tool. Based on the individual benefit-to-harm ratios, 62% of the SPRINT population and 84% of the CHS population would theoretically be recommended IBPT. The significant decrease in cardiovascular outcomes following IBPT reported in the original SPRINT trial persisted only in the group that received a "yes-treatment" recommendation from the proposed process, while the rate of serious adverse events was slightly higher in the "no-treatment" recommendation group. This process can be used to translate RCT data into individualized recommendations by identifying patients for whom a treatment's benefits outweigh its harms, while accounting for subjective views of the perceived severity of the different outcomes. The proposed approach emphasizes clinical practicality by mimicking physicians' clinical decision-making process and integrating all recommendation outputs into a usable decision support tool.
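As a rough illustration of phase 2, the sketch below computes a severity-weighted benefit-to-harm ratio for a single patient. The abstract does not specify the exact weighting scheme, so all outcome names, risk estimates, severity weights, and the decision threshold here are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of a severity-weighted benefit-to-harm ratio.
# All inputs are illustrative assumptions, not study values.

# Hypothetical phase-1 model outputs for one patient:
# absolute risk reductions (benefits) and absolute risk increases (harms).
benefit_arr = {"cvd_composite": 0.021, "all_cause_death": 0.008}
harm_ari = {"acute_kidney_injury": 0.012, "syncope": 0.004}

# Hypothetical subjective relative-severity weights per outcome.
benefit_weights = {"cvd_composite": 1.0, "all_cause_death": 1.5}
harm_weights = {"acute_kidney_injury": 0.6, "syncope": 0.3}

def benefit_harm_ratio(arr, ari, w_benefit, w_harm):
    """Severity-weighted benefit-to-harm ratio for one patient."""
    weighted_benefit = sum(w_benefit[k] * v for k, v in arr.items())
    weighted_harm = sum(w_harm[k] * v for k, v in ari.items())
    return weighted_benefit / weighted_harm

ratio = benefit_harm_ratio(benefit_arr, harm_ari, benefit_weights, harm_weights)
recommend = ratio > 1.0  # "yes" when weighted benefits exceed weighted harms
print(f"ratio={ratio:.2f}, recommend IBPT: {recommend}")
```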
OBJECTIVE: To illustrate the problem of subpopulation miscalibration, to adapt an algorithm for recalibration of the predictions, and to validate its performance.
MATERIALS AND METHODS: In this retrospective cohort study, we evaluated the calibration of predictions based on the Pooled Cohort Equations (PCE) and the fracture risk assessment tool (FRAX) in the overall population and in subpopulations defined by the intersection of age, sex, ethnicity, socioeconomic status, and immigration history. We next applied the recalibration algorithm and assessed the change in calibration metrics, including calibration-in-the-large.
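For context, calibration-in-the-large is commonly defined as the gap between the observed event rate and the mean predicted risk (some formulations instead use the intercept of a logistic recalibration model). The minimal sketch below computes it per subgroup on toy data; the data, group labels, and the specific definition chosen are assumptions for illustration, not the paper's code.

```python
import numpy as np

def calibration_in_the_large(y_true, y_pred):
    """Observed minus expected event rate; 0 indicates perfect
    calibration-in-the-large under this common definition."""
    return float(np.mean(y_true) - np.mean(y_pred))

# Toy data: outcomes, model predictions, and a subgroup label per patient.
rng = np.random.default_rng(0)
y_pred = rng.uniform(0.01, 0.3, size=1000)
y_true = rng.binomial(1, np.clip(y_pred * 1.3, 0, 1))  # model underpredicts
groups = rng.choice(["A", "B", "C"], size=1000)

citl_by_group = {
    g: calibration_in_the_large(y_true[groups == g], y_pred[groups == g])
    for g in np.unique(groups)
}
print(citl_by_group)
# The summary statistic reported in RESULTS is the variance of these
# values across all subpopulations:
print(np.var(list(citl_by_group.values())))
```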
RESULTS: 1 021 041 patients were included in the PCE population, and 1 116 324 patients were included in the FRAX population. Baseline overall model calibration of the 2 tested models was good, but calibration in a substantial portion of the subpopulations was poor. After applying the algorithm, subpopulation calibration statistics were greatly improved, with the variance of the calibration-in-the-large values across all subpopulations reduced by 98.8% and 94.3% in the PCE and FRAX models, respectively.
DISCUSSION: Prediction models in medicine are increasingly common. Calibration, the agreement between predicted and observed risks, is commonly poor for subpopulations that were underrepresented in the development set of the models, resulting in bias and reduced performance for these subpopulations. In this work, we empirically evaluated an adapted version of the fairness algorithm designed by Hébert-Johnson et al. (2017) and demonstrated its use in improving subpopulation miscalibration.
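A minimal sketch of the style of post-processing recalibration the discussion refers to (after Hébert-Johnson et al.): repeatedly find the worst-calibrated subgroup and shift its predictions by its observed-minus-expected gap until every gap falls below a tolerance. The full multicalibration algorithm also audits within prediction-level bins and handles overlapping, computationally identifiable subgroups; this simplified version, and the `alpha` tolerance, are illustrative assumptions rather than the paper's adapted algorithm.

```python
import numpy as np

def recalibrate(y_pred, y_true, groups, alpha=0.005, max_iter=100):
    """Simplified multicalibration-style post-processing: shift the
    predictions of the subgroup with the largest observed-minus-expected
    gap by that gap, repeating until all gaps are below alpha."""
    p = y_pred.astype(float).copy()
    group_ids = np.unique(groups)
    for _ in range(max_iter):
        gaps = {g: y_true[groups == g].mean() - p[groups == g].mean()
                for g in group_ids}
        g_worst = max(gaps, key=lambda g: abs(gaps[g]))
        if abs(gaps[g_worst]) < alpha:
            break  # all subgroups are calibrated-in-the-large
        p[groups == g_worst] = np.clip(
            p[groups == g_worst] + gaps[g_worst], 0.0, 1.0)
    return p
```

With disjoint subgroups, such as the intersectional cells described in MATERIALS AND METHODS, one corrective pass per group already drives every in-sample gap to (near) zero; held-out validation is still needed, as the paper's external evaluation illustrates.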
CONCLUSION: A postprocessing and model-independent fairness algorithm for recalibration of predictive models greatly decreases the bias of subpopulation miscalibration and thus increases fairness and equality.