Published: October 2016

Last updated: September 2025

Bootstrapping

Bootstrapping is a non-parametric technique used to estimate the distribution of a statistic of interest, such as an incremental cost-effectiveness ratio (ICER), from a population sample, such as a clinical trial. Random samples (of the same size as the original sample) are drawn with replacement from the data source. The statistic of interest is calculated from each of these re-samples, and these estimates are stored and collated to build up an empirical distribution for the statistic. From this distribution, measures of central tendency (mean, median) and spread (confidence intervals) are obtained. Typically, 1,000 or more bootstrap samples are required. When dealing with ICERs generated from clinical trial or observational data, it is important to keep each patient’s pair of values (for costs and effects) together in the same re-sample for each treatment alternative.

The term ‘bootstrapping’ refers to the apparently impossible feat of pulling oneself up by one’s own bootstraps: ‘parametric’ equations for sampling distributions, which may be difficult to derive (for example, for ICERs), are not required. Instead, the method relies on the data’s own observations. The fundamental assumption is that the study sample is an accurate representation of the full population. A number of methods (for example, ‘percentile’ and ‘bias-corrected’) have been developed to estimate confidence intervals from bootstrapped samples in different scenarios, including meta-analyses drawing on multiple datasets.
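As an illustration of the procedure described above, the sketch below bootstraps an ICER from simulated per-patient cost and QALY data for two trial arms. The data, sample sizes and distributional parameters are invented for the example, and the percentile method is used for the interval; none of this is drawn from a specific study.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical per-patient costs and QALYs for two trial arms (illustrative values only)
cost_new, qaly_new = rng.normal(12000, 2000, 200), rng.normal(1.10, 0.20, 200)
cost_std, qaly_std = rng.normal(9000, 1500, 200), rng.normal(1.00, 0.20, 200)

n_boot = 1000  # typically 1,000 or more bootstrap re-samples
icers = np.empty(n_boot)

for b in range(n_boot):
    # Re-sample patients with replacement within each arm,
    # keeping each patient's cost and effect values paired
    idx_new = rng.integers(0, len(cost_new), len(cost_new))
    idx_std = rng.integers(0, len(cost_std), len(cost_std))
    delta_cost = cost_new[idx_new].mean() - cost_std[idx_std].mean()
    delta_qaly = qaly_new[idx_new].mean() - qaly_std[idx_std].mean()
    icers[b] = delta_cost / delta_qaly  # incremental cost per incremental QALY

# Summarise the empirical distribution: central tendency and a percentile interval
print("Median ICER:", np.median(icers))
print("95% percentile interval:", np.percentile(icers, [2.5, 97.5]))
```

Note that the percentile interval is shown only for simplicity; as the text indicates, other approaches (such as bias-corrected intervals) may be preferable in some scenarios, and ratio statistics such as ICERs need care when the incremental effect is close to zero.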
