A

A posteriori hypotheses are generated by induction: they are formed from empirical observations together with subsequent attempts to hypothesize the underlying cause of those observations. A posteriori tests (also called post-hoc tests) are statistical tests that were not planned before study data were collected. Compared with a priori tests, these are likely to be viewed with some scepticism, because bias may be introduced by deciding what to test, and which testing method to use, after inspecting the study results. Multiple comparison methods have been developed to correct for this possible bias. In Bayesian analysis, a posteriori may be used to refer to improved, updated estimates of a quantity, based on a priori expectations combined with study observations.

How to cite: A Posteriori (Tests) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/a-posteriori-tests/

A priori (literally: ‘from the former’) hypotheses are those based on assumed principles and deductions from the conclusions of previous research, and are generated before a new study takes place. They form a typical part of the scientific method, leading to the design of experimental studies and evidence syntheses to test and refine these hypotheses. Statistical analyses to test hypotheses (see hypothesis testing) generally have more credibility when they are planned prospectively, in advance of the collection of the data. A priori hypotheses are distinct from a posteriori hypotheses, which are generated after relevant observations have been made. A priori probabilities and probability distributions are important in Bayesian analyses, where they represent expectations of a certain quantity, such as the relative effectiveness of an intervention, which may then be integrated with the observations of that quantity in a study to provide an improved, updated estimate a posteriori.

How to cite: A Priori (Tests) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/a-priori-tests/

The absolute risk of an outcome associated with exposure to an agent (e.g. receiving a particular therapy) is of less value for decision-making than the difference, or change, in absolute risk between those exposed and not exposed. This difference, termed the absolute risk reduction (ARR) where the agent (therapy) confers protection against the outcome of interest, is of fundamental importance to economic evaluation as it drives the incremental effect in an incremental cost-effectiveness ratio (ICER). The ARR is also the reciprocal of the Number Needed to Treat (NNT), which is used to communicate effectiveness in evidence-based medicine.
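
As a minimal numerical sketch of these relationships (the event rates are purely illustrative):

```python
# Hypothetical event rates: ARR and its reciprocal, the NNT.
risk_unexposed = 0.20   # assumed event rate without the therapy
risk_exposed   = 0.15   # assumed event rate with the therapy

arr = risk_unexposed - risk_exposed   # absolute risk reduction
nnt = 1 / arr                         # patients treated to prevent one event

print(f"ARR = {arr:.2f}")   # 0.05 (5 percentage points)
print(f"NNT = {nnt:.0f}")   # 20 patients
```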

How to cite: Absolute Risk Reduction (ARR) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/absolute-risk-reduction/

The Accelerated Access Review (AAR) was set up with the aim of speeding up access to innovative drugs, devices and diagnostics for NHS patients. Following its report in 2016, the UK government set up the Accelerated Access Pathway, a specialised system to streamline regulatory and market access decisions for ‘breakthrough’ products. The Accelerated Access Collaborative (AAC) has been set up to identify and select innovative technologies and provide support and funding to enable them to achieve rapid adoption in the NHS. The focus is on ‘affordable medicines or technologies which can dramatically improve efficiency, fill an unmet need or make a step-change in patient outcomes’. So far, 12 ‘rapid uptake’ interventions in 7 technology areas (with ‘full evidence bases already within the system’) have been identified, of which 3 are medicines, 8 are diagnostic tests and 1 is a device. In May 2019 it was announced that the role of the AAC will expand to become the new umbrella organisation for UK health innovation.

How to cite: Accelerated Access Review (UK) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/accelerated-access-review-uk/

B

A base case analysis usually refers to the results obtained from running an economic model with the most likely or preferred set of assumptions and input values. Sensitivity analyses may then be used to explore how the results deviate from those of the base case analysis when input values and/or modelling assumptions are altered. ‘Reference case’ (analysis) may be used as an alternative to base case analysis, especially where analysts are directed to a standard set of modelling assumptions by an HTA organisation such as NICE.

How to cite: Base Case Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/reference-case/

Bayesian analysis refers to an approach to statistical inference in which the purpose of collecting new data is to refine the estimate of a particular quantity (often a probability) that may be used for decision-making. This is in contrast to traditional ‘frequentist’ statistics, where data are collected to reject or confirm a null hypothesis at a given level of statistical significance. More specifically, Bayesian techniques are used to synthesize information known about a parameter prior to conducting a study with new data from the study, to estimate a ‘posterior’ distribution for that parameter. Although the principle of Bayesian inference was first put forward by Rev. Thomas Bayes in the eighteenth century, it was not until powerful computers became widely accessible and new computational methods were developed in the 1980s that wide application of the technique became possible. In healthcare evaluation Bayesian analysis is most commonly seen in network meta-analysis, and in certain aspects of (adaptive) trial design. It can be argued that much of economic modelling and decision analysis is Bayesian in its approach.
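
A minimal sketch of the prior-to-posterior updating described above, using an illustrative beta prior and binomial trial data (all numbers are assumed for the example):

```python
# Beta-binomial updating: a Beta(a, b) prior for a response probability is
# combined with observed successes/failures to give a Beta posterior.
prior_a, prior_b = 2, 8            # assumed prior belief (mean 0.20)
successes, failures = 30, 70       # hypothetical new study data

post_a, post_b = prior_a + successes, prior_b + failures
posterior_mean = post_a / (post_a + post_b)
print(f"Posterior mean response probability: {posterior_mean:.3f}")  # ~0.291
```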

How to cite: Bayesian Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/bayesian-analysis/

A bias is a systematic error in an aspect of a study, operating in either direction. “Systematic error” means that, even if the biased study is replicated many times, the wrong result will still be reached on average. This is different to imprecision, which refers to random errors in the conduct of the study, meaning that multiple replications of the study would deliver results that form a distribution centring on the true population value. A bias can be so small as to have no impact on the observed effect, or so large that what appears to be an effect is in fact entirely due to bias. A rigorous systematic review should assess the risk of bias of all studies in the review, as this will have an impact upon the reliability of the review’s conclusions. Bias can appear at any point in a study or trial, for example in the randomisation of participants (“selection bias”) or the choice of which findings to report (“reporting bias”). A more comprehensive overview of the range of possible biases is provided by the Cochrane Collaboration (http://bmg.cochrane.org/assessing-risk-bias-included-studies).

How to cite: Bias [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/bias/

Bootstrapping is a non-parametric technique used to estimate the distribution of an important statistic, such as an incremental cost-effectiveness ratio (ICER), from a population sample such as a clinical trial. Random samples of the same size as the original sample are drawn with replacement from the data source. The statistic of interest is calculated from each of these resamples, and these estimates are stored and collated to build up an empirical distribution for the statistic, from which measures of central tendency (mean, median) and spread (confidence intervals) are obtained. Typically, 1,000 or more bootstrap samples are required. In the case of ICERs generated from clinical trial or observational data it is important to generate pairs of values (for costs and effects) for each treatment alternative in the same re-sample. The term ‘bootstrapping’ refers to the apparently impossible achievement of pulling oneself up by one’s own bootstraps: ‘parametric’ equations for sampling distributions, which may be difficult to estimate (for example for ICERs), are not required; instead, the method relies on the observed data alone. The central and important assumption is that the study sample is an accurate representation of the full population. A number of methods (for example ‘percentile’ and ‘bias-corrected’ intervals) have been developed to estimate confidence intervals from bootstrapped samples in different circumstances, including meta-analyses from more than one dataset.
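
The sketch below illustrates the resampling logic on synthetic patient-level data, using a simple percentile interval; the sample sizes, distributions and values are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200   # patients per arm (synthetic data)

# Paired (cost, QALY) values per patient; pairs stay together when resampling.
cost_new = rng.gamma(shape=4.0, scale=2500.0, size=n)
qaly_new = rng.normal(1.10, 0.30, size=n)
cost_old = rng.gamma(shape=4.0, scale=2000.0, size=n)
qaly_old = rng.normal(1.00, 0.30, size=n)

icers = []
for _ in range(1000):                    # typically 1,000+ resamples
    i = rng.integers(0, n, size=n)       # draw patients with replacement,
    j = rng.integers(0, n, size=n)       # separately for each arm
    d_cost = cost_new[i].mean() - cost_old[j].mean()
    d_qaly = qaly_new[i].mean() - qaly_old[j].mean()
    icers.append(d_cost / d_qaly)

lo, hi = np.percentile(icers, [2.5, 97.5])   # 'percentile' interval
print(f"Bootstrap 95% interval for the ICER: {lo:,.0f} to {hi:,.0f}")
```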

How to cite: Bootstrapping [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/bootstrapping/

Budget impact analyses are used to estimate the likely change in expenditure to a specific budget holder resulting from a decision to reimburse a new healthcare intervention or some other change in policy at an aggregate population level. The budget (or financial) impact is usually calculated using a budget impact model, over a period of 3 to 5 years, at a national level or for more local healthcare payers and providers. In contrast to cost-effectiveness analyses, which are used to estimate value for money, analyses using budget impact models assess affordability. Two scenarios are usually compared: a world in which the new intervention or policy is implemented, and a counterfactual world without the new intervention. Each scenario takes into account population size, patient eligibility, speed of uptake and market share of the intervention, as well as many of the inputs associated with a model-based cost-effectiveness analysis. Budget impact models are commonly used by local or national-level decision makers for planning purposes, especially where (extra) expenditure in one budget is offset by savings in another.

How to cite: Budget Impact Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/budget-impact-analysis/

C

The Cancer Drugs Fund (CDF) for England was set up in 2011 as a response to public pressure to fund access to drugs for cancer conditions which might not be reimbursed under standard technology assessment criteria. Originally intended as a temporary arrangement to bridge to the implementation of Value Based Pricing, in 2016 it was revised and extended to become a comprehensive managed access scheme for new medicines in cancer, with clear entry and exit criteria. The CDF now accepts referrals from committees undertaking NICE single technology assessments where the evidence of effectiveness or cost-effectiveness at the time of the appraisal is considered uncertain, but it is plausible that the drug is effective and cost-effective and that availability of new data within a reasonable time-period (typically two to three years) will resolve this uncertainty. New data may come from ongoing trials or specific analyses of observational datasets such as the UK Systemic Anti-Cancer Therapy (SACT) dataset. Drugs are entered into the CDF using a managed access agreement with a specific commitment to provide relevant data to a subsequent (follow-up) technology appraisal, at which point a decision is made for routine funding (or not) and the drug leaves the CDF. The revision of the CDF came with a commitment to earlier initiation of NICE appraisals, so that final guidance is issued within 90 days of marketing authorisation, where possible.

How to cite: Cancer Drugs Fund (UK) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cancer-drugs-fund-uk/

A citation index is a bibliographic database with additional citation analysis features. One such feature is the ability to view the context of an article in terms of the items it cites and the articles that cite it: that is, articles referenced by the item (published before it) and those referencing the item (published after it). Well-known citation indexes include the Science Citation Index (SCI) and the Social Sciences Citation Index (SSCI), provided by Web of Science™.

How to cite: Citation Index [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/citation-index/

A clinical equivalence study is one where the aim is to show that the outcome for the two (or more) technologies studied differs by a clinically unimportant amount. These studies have similar rationales and characteristics to non-inferiority studies, except that the pre-specified equivalence margin ranges above and below the outcome value for the reference (comparator) intervention, and two-sided significance testing will generally be required. This more technical meaning of ‘clinical equivalence’ should not be confused with its more common use to denote that treatment options are considered to be equivalent (in efficacy, safety) by clinicians.

How to cite: Clinical Equivalence (Study) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/clinical-equivalence-study/

According to the FDA, a clinical outcome assessment (COA) is a measure that describes or reflects how a patient feels, functions, or survives. Types of COA include:

  • Patient-reported outcome (PRO) measures
  • Observer-reported outcome (ObsRO) measures
  • Clinician-reported outcome (ClinRO) measures
  • Performance outcome (PerfO) measures

How to cite: Clinical Outcome Assessment [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/clinical-outcome-assessment/

A clinical trial is a research investigation in a clinical setting, designed to supply data on the efficacy and/or safety of a drug, device, treatment or other healthcare intervention. Clinical trials may be sponsored by a governmental organisation, an academic research institute, a non-governmental organisation such as a charity, or a manufacturer. Clinical trials are (usually) characterised by clear definition of the study hypotheses to be tested, strict inclusion and exclusion criteria, randomisation of study subjects, (where possible) blinded administration of therapy and measurement of outcomes, pre-specified protocols and analysis plans, independent oversight including ethical approval, and comprehensive reporting of results. They can be expensive and lengthy to undertake, and high levels of internal validity may be achieved at the expense of external generalisability of results to the more heterogeneous populations who may receive the intervention in routine practice. Clinical trials of medicines involving human subjects are governed by Good Clinical Practice (ICH-GCP), which enforces tight guidelines on the ethical aspects of a clinical study, with high standards for all aspects of trial planning, execution and reporting, backed up by quality assurance and inspections. Clinical trials are recorded in a variety of databases including ClinicalTrials.gov (USA), the EU Clinical Trials Register, and national databases accessed by the WHO International Clinical Trials Registry Platform.

How to cite: Clinical Trial [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/clinical-trial/

Now called simply ‘Cochrane’, this important collaboration was founded in 1993 in response to Archie Cochrane’s call in the 1970s for up-to-date, systematic reviews of all relevant randomized controlled trials of health care. The collaboration now numbers over 30,000 researchers in over 100 countries, organised into over 40 regional centres and over 50 subject-specific review groups. Its aim is to organise medical research information systematically to support decision-making (e.g. to adopt new interventions) by health professionals, patients, policy makers and others, according to the principles of evidence-based medicine. Members use standardised methods to undertake high-quality and updateable systematic reviews and meta-analyses of randomised controlled trials of interventions, to answer the question: ‘What can be concluded from the totality of the RCT evidence?’ Reviews are all stored in the Cochrane Library. The Collaboration has served as a focal point for the development and implementation of good quality research methods in many countries, and some reviews have gone beyond RCTs to non-randomised study designs.

How to cite: Cochrane Collaboration [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cochrane-collaboration/

Cognitive debriefing is a structured interview technique used in the development of Patient Reported Outcomes (PROs). Members of the target patient population are interviewed to establish whether all the relevant concepts are covered, and whether respondents understand the question wording and recall period (the time period to which the questions apply), interpret the questions as intended and use the response scale appropriately. Within these interviews patients are often asked to complete the instrument while ‘thinking aloud’ and to explain the reason for each of their responses, following which specific questions can be asked by the interviewer.

How to cite: Cognitive Debriefing [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cognitive-debriefing/

The cognitive interview is a method used in the development and refinement of patient-reported outcome measures (PROMs) to capture the patient perspective on a particular domain or item. The interview may take place in the development phase of a PROM or at a subsequent stage in the validation process. A cognitive interview is a one-to-one interview with patients and may be conducted to evaluate patients’ understanding of the questions (or items), the content of the questions and whether the PROM covers all domains that are of relevance to patients with a specific condition.

How to cite: Cognitive Interviewing [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cognitive-interviewing/

A cohort model is frequently used in economic evaluation to represent the experience of a simulated cohort of patients who receive (or do not receive) a new therapy. The experience of each individual cohort member is not considered in detail. Decision trees and Markov processes are used to estimate the proportion of the cohort who experience health events or health states over time. Events and their associated costs as well as the costs and utilities associated with health states can be multiplied by the relevant proportion of the cohort and aggregated to summarise the experience of the cohort.
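
A minimal sketch of the cohort logic using a three-state Markov model (transition probabilities, costs and utilities are illustrative assumptions):

```python
import numpy as np

# States: Well, Sick, Dead (absorbing). Rows of P must sum to 1.
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
cost_per_cycle    = np.array([100.0, 2000.0, 0.0])
utility_per_cycle = np.array([0.85, 0.50, 0.0])

cohort = np.array([1.0, 0.0, 0.0])   # whole cohort starts in 'Well'
total_cost = total_qalys = 0.0
for _ in range(20):                  # 20 yearly cycles (no discounting here)
    total_cost  += cohort @ cost_per_cycle
    total_qalys += cohort @ utility_per_cycle
    cohort = cohort @ P              # redistribute proportions across states

print(f"Expected cost per patient:  {total_cost:,.0f}")
print(f"Expected QALYs per patient: {total_qalys:.2f}")
```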

How to cite: Cohort Models [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cohort-models/

Concept elicitation is a term used in outcomes research to describe the process by which concepts (i.e. symptoms and impacts) that are important to patients emerge spontaneously through the use of open-ended questions in an interview setting.

How to cite: Concept Elicitation [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/concept-elicitation/

The confidence interval around a particular value gives an estimated range around the measured value that is likely to include the true (population) value of the parameter. Confidence intervals are usually estimated from a given set of sample data: their magnitude depends on the inherent variability of the parameter in the population as well as the size of the sample taken (they are closely related to the standard error). If independent samples are taken repeatedly from the same population then a certain percentage (the confidence level) of the intervals will contain the true value of the parameter. Most commonly 95% confidence intervals are reported (the interval will contain the true population value in 95 out of 100 samples), although 99% and 90% intervals are also used. In economic modelling confidence intervals are used to define plausible ranges for the values of many input parameters.
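
A minimal sketch of estimating a 95% confidence interval for a mean from sample data (the sample itself is simulated for the example):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=50.0, scale=10.0, size=40)   # hypothetical sample, n = 40

mean = sample.mean()
sem = stats.sem(sample)   # standard error of the mean
lo, hi = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
print(f"Mean {mean:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")
```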

How to cite: Confidence Interval [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/confidence-interval/

Conjoint analysis is the analytical technique used in discrete choice experiments, which is used in healthcare to evaluate preferences from participants (patients, payers, commissioners) for different attributes of an intervention, without directly asking them to state their preferred options.

How to cite: Conjoint Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/conjoint-analysis/

Construct validity is the evidence, such as documentation of empirical findings, that supports predefined hypotheses on the expected associations among measures similar or dissimilar to the measured PRO. There are two main types of construct validity: (1) Convergent validity refers to whether the outcomes of an instrument correspond to those of another instrument measuring the same or similar constructs. For example, you would expect people who report a high utility on the EQ-5D also to report a high score on the SF-36 (i.e. their responses to both instruments should be highly correlated). (2) Discriminant validity is essentially the opposite, referring to whether the outcomes of two instruments measuring theoretically unrelated constructs are very weakly or negatively correlated. For example, if you were measuring introversion, you would anticipate that the outcome of an instrument designed to measure introversion would be negatively correlated with that produced by an instrument designed to measure extroversion.

How to cite: Construct Validity [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/construct-validity/

Content validity is the evidence that patients and experts were involved in the development of a patient-reported outcome measure (PROM), and that they considered the content of the PRO measure relevant and comprehensive for the concept, population, and aim of the measurement application. This includes documentation of: (1) the qualitative and/or quantitative methods used to elicit and confirm attributes of the PRO relevant to the measurement application; (2) the characteristics of participants included in the evaluation (e.g. race/ethnicity, culture, age, gender, socio-economic status, literacy level), with an emphasis on similarities or differences with respect to the target population; and (3) the justification for the recall period for the measurement application.

How to cite: Content Validity [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/content-validity/

Whilst ‘cost’ is typically thought of in the everyday sense as the monetary price that must be paid in order to acquire something, in economic evaluation it is preferable to consider cost as the (monetary) value of anything that has to be sacrificed in order to acquire something. Thus, the cost of an item could reflect its monetary price, but could also reflect the time that must be spent in order to obtain the item. In health economics, costs usually reflect the expenditure of the healthcare system on resources such as treatments, monitoring, staff time and other consumables. These costs are, however, better thought of in terms of the opportunity cost associated with ‘what other benefits could the use of those same resources have achieved?’ Thus, the true ‘cost’ of the use of a resource may depend upon whether it was already being used to its full capacity. Costs are often categorised into different types, such as direct and indirect costs (reflecting whether the costs fall to the health and social care provider, or to other sectors) or fixed and variable costs (reflecting the initial payment for equipment and the additional cost per use of consumables). Another important distinction is between average cost and marginal cost: the latter (more important for economic evaluation) being the additional cost of one further unit of resource, which frequently declines as more resource units are consumed. Incremental cost, denoting the difference in overall costs associated with the use of an intervention compared with the use of an alternative, is usually a key output of an economic evaluation.

How to cite: Cost [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cost/

Cost minimisation analysis is a method of comparing the costs of alternative interventions (including the costs of managing any consequences of the intervention), which are known, or assumed, to have an equivalent medical effect. This type of analysis can be used to determine which of the treatment alternatives provides the least expensive way of achieving a specific health outcome for a population.

How to cite: Cost Minimisation Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cost-minimisation-analysis/

Cost of illness (COI) is a summary of the costs of a particular disease to society. This value includes direct costs of treating the disease such as healthcare system costs for diagnosis, treatment and management of disease progression and patients’ own costs (travel, over-the-counter medication), as well as indirect costs such as productivity loss resulting from time off employment. In a large proportion of the many reported cost-of-illness studies, estimates of resource utilisation are derived from different survey and registry sources, which are converted to costs using representative ‘unit costs’ and then aggregated across relevant population cohorts. In US cost of illness studies, charges may be used and converted to estimated costs using cost-to-charge ratios. Cost-of-illness studies are often used to highlight the large burden associated with particular conditions, as well as differentials by patient and other characteristics. Although such studies can provide useful baseline values for economic evaluations, they are not economic evaluations, as they do not estimate how the cost burden might change if new interventions are introduced. Costs ‘attributable’ to a condition may be estimated by comparing costs of matched samples with and without the condition. Calculation of ‘attributable costs’ for conditions that may be found in the same individuals (e.g. obesity and diabetes) may not be straightforward: cost-of-illness studies have often been criticised for overestimating disease-specific costs.

How to cite: Cost of Illness [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cost-of-illness/

In healthcare evaluation cost-benefit analysis (CBA) is a comparison of interventions and their consequences in which both costs and resulting benefits (health outcomes and others) are expressed in monetary terms. This enables two or more treatment alternatives to be compared using the summary metric of net monetary benefit, which is the difference between the benefit of each treatment (expressed in monetary units) and its cost. Monetary valuations of benefits are commonly obtained through willingness to pay (WTP) surveys or discrete choice experiments (DCEs). Although popular in other fields, CBA is not commonly used in health technology assessment due to the difficulty of associating monetary values with health outcomes such as (increased) survival. Most commonly CBAs have been used to assess large capital development projects (new hospital facilities) or interventions that improve waiting times or location of/access to services.

How to cite: Cost-Benefit Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cost-benefit-analysis/

The cost-effectiveness acceptability curve (CEAC) is a graph summarising the impact of uncertainty on the result of an economic evaluation, frequently expressed as an ICER (incremental cost-effectiveness ratio), in relation to possible values of the cost-effectiveness threshold. The graph plots a range of cost-effectiveness thresholds on the horizontal axis against the probability that the intervention will be cost-effective at each threshold on the vertical axis. It can usually be drawn directly from the (stored) results of a probabilistic sensitivity analysis. The CEAC helps the decision-maker to understand the uncertainty associated with making a particular decision to approve or reject a new health technology.
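
A minimal sketch of how a CEAC can be computed from stored probabilistic sensitivity analysis output, using the net-benefit decision rule (the incremental costs and effects are simulated for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
d_cost = rng.normal(5000.0, 2000.0, size=5000)   # incremental cost per PSA iteration
d_qaly = rng.normal(0.30, 0.15, size=5000)       # incremental QALYs per iteration

for threshold in range(0, 50001, 10000):
    p_ce = float(np.mean(threshold * d_qaly - d_cost > 0))   # P(positive net benefit)
    print(f"Threshold {threshold:>6}: P(cost-effective) = {p_ce:.2f}")
```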

How to cite: Cost-Effectiveness Acceptability Curve (CEAC) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cost-effectiveness-acceptability-curve-ceac/

The cost-effectiveness acceptability frontier (CEAF) is a graph summarising the uncertainty around the cost-effectiveness of the interventions compared in a model, by indicating which strategy is economically preferred at different threshold values for cost-effectiveness. Similar to the cost-effectiveness acceptability curve, the graph plots a range of possible cost-effectiveness thresholds on the horizontal axis against the probability that an intervention of interest will be cost-effective (at the given threshold value) on the vertical axis. As the threshold increases, the economically preferred treatment changes, the switch point being where the threshold value increases beyond the relevant ICER reported for the intervention of interest. This type of presentation is particularly useful if there are three or more alternatives being compared, in which case there may be two or more switch points at different threshold values.

How to cite: Cost-Effectiveness Acceptability Frontier (CEAF) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cost-effectiveness-acceptability-frontier-ceaf/

Cost-effectiveness analysis evaluates the effectiveness of two or more treatments relative to their cost. The aim of the decision maker when assessing a new intervention is to maximise outcomes (e.g. QALYs) and minimise opportunity costs; cost-effectiveness analysis is the method used to measure these outcomes. In England, the decision on whether an intervention is cost-effective is made by the National Institute for Health and Care Excellence (NICE). Interventions that are both more effective at producing health benefits than other interventions and associated with net cost savings (i.e. the additional cost of the intervention is outweighed by the cost savings elsewhere) are said to be dominant. In the event that a cost-effectiveness analysis shows that an intervention is more effective and more costly, the decision makers weigh up the additional costs against the additional QALYs. NICE is generally willing to pay around £20,000 per QALY gained by a new treatment. For example, if a cost-effectiveness analysis showed that an intervention produced 0.5 additional QALYs and was associated with additional costs of no more than £10,000, then the intervention would be considered cost-effective.
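
The closing example reduces to a single division; a minimal check of that arithmetic:

```python
extra_cost  = 10_000   # additional cost of the intervention (GBP)
extra_qalys = 0.5      # additional QALYs gained

icer = extra_cost / extra_qalys
print(f"ICER = GBP {icer:,.0f} per QALY gained")   # 20,000 -> at the threshold
```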

How to cite: Cost-Effectiveness Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cost-effectiveness-analysis/

The cost-effectiveness frontier is the line connecting successive points on a cost-effectiveness plane which each represent the effect and cost associated with different treatment alternatives. The gradient of a line segment represents the ICER of the treatment comparison between the two alternatives represented by that segment. The cost-effectiveness frontier consists of the set of points corresponding to treatment alternatives that are considered to be cost-effective at different values of the cost-effectiveness threshold. The steeper the gradient between successive points on the frontier, the higher the ICER between these treatment alternatives, and the more expensive alternative would be considered cost-effective only when a high value of the cost-effectiveness threshold is assumed. Points not lying on the cost-effectiveness frontier (usually above and to the left of the frontier) represent treatment alternatives that are not considered cost-effective (compared with a relevant alternative lying on the frontier) at any value of the cost-effectiveness threshold.

How to cite: Cost-Effectiveness Frontier [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cost-effectiveness-frontier/

The cost-effectiveness plane is used to visually represent the differences in costs and health outcomes between treatment alternatives in two dimensions, by plotting the costs against effects on a graph. Health outcomes (effects) are usually plotted on the x axis and costs on the y axis. Frequently ‘current practice’ is plotted at the origin, and so the x and y values represent incremental health outcomes and incremental costs versus current practice. More than two points can be represented on the plane, with the line connecting cost-effective alternatives being called the cost-effectiveness frontier. The cost-effectiveness plane is divided into four quadrants: most cost-effectiveness analyses deliver results in the north-east (NE) quadrant, in which new interventions generate more health gains but are more expensive. Other quadrants are relevant when a new intervention generates poorer health outcomes (NW or SW) or lower costs (SW or SE). Cost-effectiveness planes are also useful to show the uncertainty around cost-effectiveness outcomes, often represented as a cloud of points on the plane corresponding to different iterations of an economic model in a (probabilistic) sensitivity analysis.

How to cite: Cost-Effectiveness Plane [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cost-effectiveness-plane/

The cost-effectiveness threshold is the maximum amount a decision-maker is willing to pay for a unit of health outcome. If the cost-effectiveness (ICER) of a new therapy (compared with a relevant alternative) is estimated to be below the threshold, then (other things being equal) it is likely that the decision-maker will recommend the new therapy. However, for values near the threshold, the level of uncertainty may become important. Thresholds are often established by analysis of previous (reimbursement) decisions: they are not themselves outputs of cost-effectiveness analyses, but guides (or rules) for interpreting these outputs for decision-making, and they are specific to each unit of health outcome used. They are closely related to the economic concept of ‘opportunity cost’, in which the value of an intervention is considered to be the value of what is foregone in order to implement the intervention. The threshold value stands for the health outcome that could have been achieved if the resources required to implement the intervention of interest had been used elsewhere. Although some jurisdictions (e.g. England and Wales, through NICE) make the thresholds that they use explicit, in other countries the thresholds may not be explicit and may vary by health care sector or disease area.
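
The threshold rule is often restated as net monetary benefit (NMB): recommend the intervention when NMB = threshold × ΔQALYs − Δcost is positive. A minimal sketch with illustrative numbers:

```python
threshold   = 20_000.0   # willingness to pay per QALY (GBP)
extra_qalys = 0.4        # incremental QALYs
extra_cost  = 6_000.0    # incremental cost (GBP)

nmb = threshold * extra_qalys - extra_cost          # here: 2,000 (ICER = 15,000)
print(f"NMB = GBP {nmb:,.0f};", "recommend" if nmb > 0 else "reject")
```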

How to cite: Cost-Effectiveness Threshold [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cost-effectiveness-threshold/

Cost-utility analysis is a type of cost-effectiveness analysis in which the (incremental) cost per quality-adjusted life year (QALY), or some other preference-based valuation of health outcome, is estimated. Two alternative interventions are assessed by comparing how many additional QALYs are gained at what additional cost. The use of the QALY as a measure of health outcome enables comparisons to be made across disease areas, which is particularly useful for broad-based resource allocation decision-making. Cost-utility analyses are frequently required by health technology assessment agencies, such as the National Institute for Health and Care Excellence (NICE) in the UK.

How to cite: Cost-Utility Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/cost-utility-analysis/

Coverage with evidence development (CED) is a form of managed entry agreement for new health technologies, especially pharmaceuticals, where the technology is reimbursed (‘covered’) for a limited period of time with a specific requirement for the collection and presentation of further evidence. The term was popularised by the US Centers for Medicare and Medicaid Services (CMS), and similar approaches carrying different names (e.g. managed access agreement, ‘conditional reimbursement’, ‘interim funding’) are applied in many other countries. The use of CED reflects the increasing demand for early access to new interventions for patients with a high level of unmet need, when there remain important uncertainties about clinical and cost-effectiveness. Frequently CED is applied together with a price discount or a patient access scheme, which can be revised when more evidence is available. Health care systems have found it challenging to put in place a large number of (varied) evidence generation schemes, and there have been questions about the usefulness of the new evidence (timeliness, failure to resolve uncertainty) and about who pays for the new activity.

How to cite: Coverage with Evidence Development [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/coverage-with-evidence-development/

Credibility intervals are used in Bayesian analysis to summarise the (posterior) distribution of a given outcome. Whilst they are often numerically similar to frequentist confidence intervals, credibility intervals have a direct probabilistic interpretation: a 95% credibility interval contains the true value of the quantity with 95% probability, given the prior and the observed data. Credibility intervals are commonly used to represent the degree of uncertainty in the outputs of network meta-analyses.

How to cite: Credibility Interval [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/credibility-interval/

Criterion validity is the degree to which the scores of a PRO measure are an adequate reflection of a “gold standard.” There are two types of criterion validity: (1) Concurrent validity is demonstrated when the outcomes of the instrument are highly correlated with those of the criterion measured at the same time. (2) Predictive validity is demonstrated when the outcomes of the instrument are highly correlated with those of a criterion that can only be assessed at some point in the future (i.e. after the instrument has been administered). Criterion validity is only as good as the validity of the gold standard against which the instrument is compared.

How to cite: Criterion Validity [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/criterion-validity/

Critical appraisal is the process of systematically assessing the report of a piece of research (for example, a clinical trial, meta-analysis or cost-effectiveness analysis) in terms of the validity and correct application of its methods, correct reporting of the results and justification for the interpretation of the results. Critical appraisal can be performed on many types of research reports relevant to health technology assessment, and a number of authoritative checklists are available to guide the process for each type of research (See http://www.cebm.net/).

How to cite: Critical Appraisal [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/critical-appraisal/

D

Data extraction is the process of retrieving relevant information and data from a data source. Data for systematic reviews can come from a range of sources including both published and grey literature. A data extraction form is designed to capture the information of interest in a structured and systematic way, allowing easy manipulation and analysis at later stages of the review, as well as quality checking of the extraction process.

How to cite: Data Extraction [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/data-extraction/

A database search is a query created and performed in one or more (literature) databases to retrieve studies relevant to an information need. Searches need to be adapted to the differing functionality and search syntax that may characterise different databases. Searching using subject headings and text words is supported in most databases. Complex search queries can be created using word truncation, phrase searching, word adjacency, limits (such as date or publication type) and Boolean operators (‘AND’, ‘OR’ …) to combine different concepts.
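
As an illustration, a hypothetical Ovid MEDLINE-style query might combine these features as follows (the topic, headings and limits are invented for the example):

```
1. exp Diabetes Mellitus, Type 2/         subject heading, exploded
2. (diabet* adj3 "type 2").ti,ab.         truncation and word adjacency in title/abstract
3. 1 or 2                                 Boolean OR combines terms for one concept
4. screening.ti,ab.
5. 3 and 4                                Boolean AND intersects the two concepts
6. limit 5 to yr="2010-Current"           date limit
```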

How to cite: Database Search [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/database-search/

A decision tree is a form of analytical model in which distinct branches are used to represent a potential set of outcomes for a patient or patient cohort. A decision tree consists of a series of ‘nodes’ where branches meet: each node may take the form of a ‘choice’ (a decision about which alternative intervention to use) or a ‘probability’ (an event occurring or not occurring, governed by chance). Probabilities at any specific node must always sum to 1. Costs and outcomes are assigned to each segment of each branch, including the end (‘leaf’) of each branch. Outcomes and costs for each branch are combined using branch probabilities, and the tree is ‘rolled back’ to a decision node, at which the expected outcome and cost for each treatment alternative can be compared. Decision trees are frequently used to model interventions that have distinct outcomes that can be measured at a specific time point, as opposed to evaluations where the timing of the outcome is important.
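
A minimal sketch of rolling back a tree with one choice node and two chance nodes (the probabilities, costs and utilities are illustrative):

```python
def expected(branches):
    """branches: (probability, cost, utility) leaves; probabilities must sum to 1."""
    assert abs(sum(p for p, _, _ in branches) - 1.0) < 1e-9
    return (sum(p * c for p, c, _ in branches),
            sum(p * u for p, _, u in branches))

# Chance nodes for each arm of the decision: (probability, cost, utility).
new_treatment = expected([(0.70, 4000.0, 0.90),    # success
                          (0.30, 9000.0, 0.40)])   # failure
usual_care    = expected([(0.50, 1000.0, 0.90),
                          (0.50, 6000.0, 0.40)])

print("New treatment: cost {:.0f}, utility {:.3f}".format(*new_treatment))
print("Usual care:    cost {:.0f}, utility {:.3f}".format(*usual_care))
```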

How to cite: Decision Tree [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/decision-tree/

The Delphi method was originally developed as a structured approach for collecting opinions about the future and judging the likelihood of future events or situations. In health technology assessment (HTA) and economic evaluation this method has been adapted (sometimes referred to as ‘modified Delphi’) to assess assumptions and estimate parameters (e.g. for economic modelling) where source information is lacking or may be subject to bias. In this method a group of experts reply anonymously to questionnaires, and subsequently receive feedback in the form of a summary representation of the group response, following which each may modify their response. The process is repeated over a number of rounds until expert consensus is reached. Key aspects of the process are the selection of appropriate experts, careful construction of questionnaires and summary feedback provided iteratively to experts, and the anonymity of the experts, who are therefore not influenced by the dynamics of a group discussion. There is no guarantee of reliability (different panels of experts may come to different consensus views), and so sensitivity analysis may be required to test the impact of uncertainty in parameters derived by this method. HTA agencies such as NICE generally prefer values to be derived from observational datasets rather than from expert opinion.

How to cite: Delphi Method [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/delphi-method/

Deterministic sensitivity analysis (DSA) is a method that can be used to investigate the sensitivity of the results from a model-based analysis to variations in a specific input parameter or set of parameters. One or more parameters are manually changed (usually across a pre-specified range) and the results are analysed to determine to what extent the change has an impact on the output values. The range of variation of each parameter is usually pre-specified and, where appropriate, corresponds to the uncertainty in that parameter reported in source studies (for example, the 95% confidence interval for efficacy from a source trial or meta-analysis). In univariate sensitivity analysis one parameter is varied at a time, whilst in multivariate sensitivity analysis more than one parameter is varied simultaneously. The results of deterministic sensitivity analysis are usually expressed as line graphs or bar charts. A ‘tornado chart’ refers to a stacked summary of bar graphs representing univariate sensitivity analyses for a wide range of input parameters, ordered according to the extent (spread) of variation in the resulting model output value (with the widest variation on top). It is usually not possible to vary more than 4 to 5 parameters at the same time in this form of analysis: probabilistic sensitivity analysis is required to assess the impact of simultaneous variation of many input parameters. Univariate sensitivity analyses should be viewed with caution where input parameters are highly correlated (i.e. where parameters correlated with the parameter of interest are not varied together with it), such as the sensitivity and specificity of diagnostic tests or the utilities of pre- and post-progression health states.
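
A minimal sketch of a univariate analysis over a simple net-monetary-benefit calculation, ordered by spread as for a tornado chart (all values and ranges are illustrative):

```python
BASE = {"extra_cost": 6000.0, "extra_qalys": 0.40, "threshold": 20000.0}
RANGES = {"extra_cost": (4000.0, 9000.0),     # pre-specified low/high values
          "extra_qalys": (0.25, 0.55)}

def nmb(p):
    return p["threshold"] * p["extra_qalys"] - p["extra_cost"]

rows = []
for name, (low, high) in RANGES.items():
    outputs = [nmb({**BASE, name: value}) for value in (low, high)]
    rows.append((name, min(outputs), max(outputs)))

# Widest variation first, as in a tornado chart.
for name, lo, hi in sorted(rows, key=lambda r: r[2] - r[1], reverse=True):
    print(f"{name:<12} NMB from {lo:>7,.0f} to {hi:>7,.0f}")
```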

How to cite: Deterministic Sensitivity Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/deterministic-sensitivity-analysis/

A diagnostic test accuracy review is a specific type of systematic review focusing on finding, summarizing and synthesizing the results of studies providing evidence on the performance (“accuracy”) of a particular diagnostic test: the test’s ability to identify correctly those with the target condition (sensitivity), and those without the condition (specificity). This type of systematic review may be used to draw up a ROC curve (sensitivity plotted against 1 – specificity) for the test, and also to assess why different studies report different levels of accuracy for the same diagnostic test. It may also compare reported accuracy with that of other diagnostic tests designed to detect the same condition.

How to cite: Diagnostic Test Accuracy Review [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/diagnostic-test-accuracy-review/

The disability-adjusted life year (DALY) is a generic measure of health effect that can be used in cost-effectiveness analysis as an alternative to the quality-adjusted life year (QALY). Originally developed by the World Bank in 1990, DALYs have been adopted by the World Health Organisation (WHO) as a favoured way of comparing overall health and life expectancy in different countries. They are a measure of overall disease burden, expressed as the number of years lost due to ill-health, disability or early death. A DALY represents one year of healthy life, and is usually expressed as DALYs lost compared with a theoretical maximum: a life with maximum achievable life-expectancy and without disability or disease. In some calculations years of healthy life are age-weighted, so that disability in younger years has a higher impact on overall DALY scores. Based on the ‘person trade-off’ technique, disability weights for (seven levels of) conditions have been determined by a panel of experts, and aggregate DALYs lost (globally and by country) calculated for a wide range of conditions. DALYs differ from QALYs in that the weights are not based on population surveys (of preferences for health states) and the two components (reduced survival and increased disability) are added, not multiplied. In Western countries psychiatric conditions (generating large amounts of time spent with disability, but not reduced survival) are prominent among leading causes of lost DALYs.

How to cite: Disability-Adjusted Life-Years (DALYs) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/disability-adjusted-life-years-dalys/

Economic evaluations refer to a choice to be made between alternative interventions at a specific point in time; however, the costs and health outcomes associated with each intervention occur at different points in time, present or future. Costs and health outcomes that are predicted to occur in the future are usually valued less than present costs, and so it is recommended that they be discounted in analysis. This is usually achieved by expressing the results as series (streams) of health outcomes and costs over time, applying a discounting factor to each value in the series and then aggregating to give a ‘present value’ of each stream. The discount applied to a given value grows with time, at a pace set by an underlying discount rate. NICE guidelines recommend that costs and health outcomes should be discounted at 3.5% per year. So, 1 QALY (or £100) experienced/spent in Year 2 would have a present value of 0.966 QALYs (£96.62). For Year 11, the present values would be 0.709 QALYs (£70.89). The choice of discount rate is of particular importance for preventive healthcare interventions: the use of a higher discount rate results in less value being attached to costs and health outcomes in the future.
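
The worked figures in this entry can be reproduced directly (assuming, as the text implies, that Year 1 is the present and is undiscounted):

```python
rate = 0.035   # NICE-recommended annual discount rate

def present_value(amount, year):
    return amount / (1 + rate) ** (year - 1)

for year in (2, 11):
    pv = present_value(1.0, year)
    print(f"1 QALY in Year {year}: present value {pv:.3f}")  # 0.966 and 0.709
```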

How to cite: Discount Rate [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/discount-rate/

A discrete choice experiment (DCE) is a quantitative method increasingly used in healthcare to elicit preferences from participants (patients, payers, commissioners) without directly asking them to state their preferred options. In a DCE participants are typically presented with a series of alternative hypothetical scenarios containing a number of variables or “attributes” (usually ≤5), each of which may have a number of variations or “levels”. Participants are asked to state their preferred choice between 2 or 3 competing scenarios, each of which consists of a combination of these attributes/levels. Typically survey instruments include 5-10 such choices to be completed. Preferences are revealed without participants explicitly being asked to state their preferred level for each individual attribute. For example, a pharmaceutical company might be interested in determining patient preferences for a painkiller provided either as a tablet or liquid formulation. Attributes (and levels) tested in a DCE might consist of “time for painkiller to work” (<10 minutes, 10-30 minutes, >30 minutes), “convenience” (inconvenient, convenient) and “number of repeat doses required” (0, 1-2, ≥3). Examples of the use of DCEs in healthcare evaluation include assessment of patient preferences for diagnostic services, clinic configurations or different routes of administration for medicines.

How to cite: Discrete Choice Experiment (DCE) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/discrete-choice-experiment-dce/

Discrete event simulation (DES) is a computer-modelling technique used in economic evaluation of health interventions in which individual patient experience is simulated over time, and events occurring to the patient and the consequences of such events are tracked and summarised. Unlike cohort Markov models, in DES movements between patients’ health states are usually driven by events which may occur at varying times (rather than during cycles of fixed length), and time-to-event distributions are required for each event. Life courses of events and health states are constructed for a succession of individual modelled patients, which may then be aggregated over time to produce the summary experience of a patient cohort. Event likelihoods are driven by individual patient characteristics, which are recorded at baseline and may be updated as the patient’s experience (events, new health states) accumulates. Events and health states can be associated with resource use/cost and utilities. DES is likely to be useful for modelling complex conditions with many possible types of event and health state (e.g. complications of diabetes) or situations where the patient’s history may impact on future events. The building up of individual patient histories in DES gives this method some attraction, especially to clinician reviewers. However, the modelling can become more complex than with more straightforward techniques (e.g. cohort Markov models), and deriving time-to-event input values and distributions can be challenging.
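
A toy sketch of the event-driven logic: each simulated patient faces recurring ‘relapse’ events and a competing ‘death’ event, with exponential time-to-event distributions (all rates, costs and the time horizon are assumptions for the example):

```python
import random

random.seed(7)

def simulate_patient(horizon=10.0, relapse_rate=0.20, death_rate=0.05):
    time_of_death = random.expovariate(death_rate)   # competing terminal event
    t, cost = 0.0, 0.0
    while True:
        t += random.expovariate(relapse_rate)        # time to next relapse
        if t >= min(horizon, time_of_death):
            break                                    # censored by death or horizon
        cost += 3000.0                               # cost of managing the relapse
    return cost

mean_cost = sum(simulate_patient() for _ in range(10_000)) / 10_000
print(f"Mean relapse-management cost per patient: {mean_cost:,.0f}")
```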

 

How to cite: Discrete Event Simulation [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/discrete-event-simulation/

 

A disease model is a simplified mathematical representation of the course of a disease over time in a patient cohort.  In health technology assessment, disease models are used to represent the progression of chronic diseases and the impact of risk factors of interest on disease incidence, progression and mortality. Often a micro-simulation approach is used, but simpler models may use a cohort Markov design (Markov model). Model inputs generally come from epidemiological studies. Disease models may be used to assess the potential impact of new therapies through varying the rates of progression or the balance of risk factors.  They may form the basis of economic models, where specific treatment options are compared and treatment costs and utility values are included as inputs.

 

How to cite: Disease Model [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/disease-model

 

Distributions in statistics are often used to describe the spread of values for a particular characteristic that is measured in a population (which may be an ‘input parameter’ for an economic evaluation). For example, although we may know the mean age of a population, nearly all individuals will have ages falling above or below this mean value, and not necessarily in a uniform manner. Parameter distributions are frequently defined using a mean and standard deviation (which fully define a normal distribution), or by means of “shape” and “scale” parameters for more complex distributions. Commonly used distributions in economic modelling are either symmetrical, such as the normal distribution, often used for parameters such as population age and intervention effectiveness (e.g. relative risk reduction), or skewed, such as the gamma or lognormal distributions, used for ratios or for parameters such as costs which cannot be negative. Distributions that describe a mutually exclusive set of outcomes, such as the binomial, Poisson, beta or Dirichlet, are used to represent input parameters that are probabilities. Specifying model input parameters as distributions (not just fixed values) enables probabilistic sensitivity analysis to be performed, allowing the uncertainty of the model outputs (e.g. incremental cost-effectiveness ratio) to be described and assessed.
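
For illustration, the snippet below draws probabilistic sensitivity analysis (PSA) samples for typical input parameters using the distribution families described above; all parameter values are assumptions for this sketch.

```python
import numpy as np
rng = np.random.default_rng(42)
n = 5000  # number of probabilistic sensitivity analysis (PSA) draws

# Illustrative input parameters (all numbers are assumptions for this sketch)
age      = rng.normal(65, 8, n)                       # symmetric: normal(mean, sd)
cost     = rng.gamma(shape=4.0, scale=250.0, size=n)  # skewed, non-negative (mean 1000)
rel_risk = np.exp(rng.normal(np.log(0.8), 0.1, n))    # lognormal for a ratio
p_event  = rng.beta(20, 80, n)                        # probability bounded on [0, 1]

print(round(age.mean(), 1), round(cost.mean(), 0),
      round(rel_risk.mean(), 3), round(p_event.mean(), 3))
```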

 

How to cite: Distributions [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/distributions/

 

Disutility represents the decrement in utility (valued quality of life) due to a particular symptom or complication. Disutility values are often expressed as a negative value, to represent the impact of the symptom or disease. They may be derived by subtracting utility values for a health state which includes the component (symptom, complication) of interest from a health state that is identical except for the absence of that component. Disutilities may be combined (usually additively, although occasionally multiplicative combinations are used) to provide an estimate of their collective impact on a patient’s quality of life. However, as with utilities, this needs to be done with care as there are situations where disutility (A+B) ≠ (disutility (A) + disutility (B)) when the individual and combined health states are valued independently.
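
A small worked example (with illustrative values) contrasting additive and multiplicative combination of two disutilities:

```python
# Combining two disutilities with a baseline state utility (values illustrative)
u_base = 0.85                  # utility of the underlying health state
dis_a, dis_b = -0.05, -0.10    # disutilities for symptom A and complication B

additive = u_base + dis_a + dis_b                    # 0.85 - 0.05 - 0.10 = 0.70
# One multiplicative convention treats each disutility as a proportional decrement
multiplicative = u_base * (1 + dis_a) * (1 + dis_b)  # 0.85 * 0.95 * 0.90 ≈ 0.727
print(additive, round(multiplicative, 3))
```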

 

How to cite: Disutility [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/disutility/

 

A dominant treatment option is one that is both less costly and results in better health outcomes than the comparator treatment (the former ‘dominates’ the latter). Whatever cost-effectiveness threshold is used, it is impossible for the comparator to be considered economically preferable.  Conversely, a treatment option that is both more expensive and results in poorer health outcomes is said to be ‘dominated’.
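
The decision rule is simple enough to state directly in code; the cost and QALY figures below are purely illustrative:

```python
def dominance(cost_a, effect_a, cost_b, effect_b):
    """Check for strict dominance between two treatment options."""
    if cost_a < cost_b and effect_a > effect_b:
        return "A dominates B"
    if cost_b < cost_a and effect_b > effect_a:
        return "B dominates A"
    return "no dominance: an ICER (and threshold) is needed to choose"

# Option A is cheaper and more effective, so it dominates
print(dominance(cost_a=9000, effect_a=6.1, cost_b=11000, effect_b=5.8))
```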

 

How to cite: Dominance [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/dominance/

 

E

Early economic models are developed to allow the user to gain an understanding of the likely cost-effectiveness of treatment alternatives under different (often future) circumstances. They are most commonly used to assess the likely cost-effectiveness that will be associated with different results of clinical trials that are planned or in progress, together with different costs (including drug price) of the new intervention. They can also be used to determine the relative importance of different model parameter inputs (i.e. to which input parameters the cost-effectiveness result is most sensitive). This may help inform decisions on target populations, pricing, and prioritisation of further research. Early models may be developed into ‘final economic models’ used to support published cost-effectiveness analyses and submissions for reimbursement when definitive results are available from source clinical studies. However, the value of an early model often lies in its simplicity, where expert judgment has been used to focus on the central drivers of the economic analysis.

 

How to cite: Early Modelling / Early Model [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/early-modelling-early-model/

 

Economic evaluation in healthcare is the analysis of the costs and effects of alternative interventions that may be given to a defined population in order to support decision-making about reimbursement or implementation of the preferred interventions. Both the immediate costs and health effects and their ‘downstream’ consequences (future events averted) are considered. The output/result of an economic evaluation is an incremental cost-effectiveness ratio, which may be compared with a threshold value (willingness to pay for a unit of health outcome).  The choice of outcome measure underpins the classification of economic evaluation into cost-effectiveness (health effect), cost-utility (QALY) and cost-benefit (valuation of outcome in money terms) analyses. Comparisons considered in an economic evaluation may best be considered as treatment strategies: alternative courses of action including diagnostic work-up, treatment, monitoring and management of consequences. Key considerations are the perspective (whose costs), time horizon (over what period into the future) and metric of outcome/effect to be used. Most economic evaluations rely on modelling to synthesise the outputs of different studies, although some evaluations are based directly on trials or other comparative studies. An important part of an economic evaluation is a sensitivity analysis, which assesses the robustness of the results to choices of model input values or assumptions underlying the analysis. More sophisticated evaluations may include assessment of indirect impacts (e.g. on other family members or broader economic consequences), distributional consequences (equity), or the value of collecting more information to reduce the uncertainty of the economic result (value of information analysis).

 

How to cite: Economic Evaluation [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/economic-evaluation/

 

Economic modelling refers to the development of a model that is a simplified representation of the real world and is useful in supporting decision-making. In economic evaluation of healthcare interventions, modelling synthesizes clinical, epidemiological and economic evidence from appropriate (and different) sources into an evaluation framework to derive an estimate for a specific outcome, for example an incremental cost-effectiveness ratio. Economic modelling is based on a specific design/structure, a range of modelling assumptions, and a set of input parameters. Common designs are decision trees, cohort Markov models, micro-simulations and (less frequently) discrete event simulations. Uncertainty surrounding a point estimate of the model outcome can be investigated by conducting sensitivity analysis, based on an understanding of uncertainty in the input parameters in the model and associated with the model structure.

 

How to cite: Economic Modelling [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/economic-modelling/

 

The economically justifiable price (EJP) reflects the maximum price that could be set for a healthcare intervention, such that it will still be deemed to be an efficient use of limited healthcare resources.  This is often estimated as the price that would result in an incremental cost-effectiveness ratio being equal to, or just below, the willingness to pay threshold.  Of course, this does not mean that a recommendation should be made that the intervention’s price be set to equal its EJP, since there may be many areas of uncertainty in deriving the EJP, and also many other factors may have a bearing on the pricing decision.  In fact, if an intervention’s price were set equal to its EJP, the net health benefit associated with introducing the intervention would be exactly zero.
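
Under simple linear assumptions the EJP can be back-solved from the willingness-to-pay threshold. The sketch below is hypothetical throughout (all inputs are assumed values); note that at the resulting price the incremental net health benefit is exactly zero, as described above.

```python
# Back-solving an EJP under simple linear assumptions (all values illustrative)
threshold   = 30000    # willingness to pay per QALY gained
delta_qaly  = 0.25     # incremental QALYs vs comparator
other_costs = -1500    # other incremental costs (negative = downstream savings)
comparator_drug_cost = 2000

# ICER = (price - comparator_drug_cost + other_costs) / delta_qaly = threshold
ejp = threshold * delta_qaly + comparator_drug_cost - other_costs
print(ejp)  # 7500 + 2000 + 1500 = 11000

# Check: net health benefit at this price is zero
delta_cost = ejp - comparator_drug_cost + other_costs   # 7500
print(delta_qaly - delta_cost / threshold)              # 0.0
```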

 

How to cite: Economically Justifiable Price [online]. (2016). York; York Health Economics Consortium; 2016. http://www.yhec.co.uk/glossary/economically-justifiable-price/

 

 

Effect size is a statistical concept that is used to measure the strength of the relationship between two variables on a numeric scale. This metric is used in statistical testing of the null hypothesis (usually that the effect is zero). Metrics for effect size may be in some form of physical unit, such as differences in blood pressure or blood glucose level. More commonly they are ‘unit free’, such as Pearson’s correlation co-efficient (r), standardised difference of means or Cohen’s d, and coefficients in regression equations. For binary data relative risk (and relative risk reduction) is frequently used for effect size in clinical trials, and odds ratios are especially useful for combining the results of many studies in meta-analyses. In hypothesis testing, effect size, power (1−β), sample size, and critical significance level (α) are related to each other: so, for example, the desired effect size to be detected, combined with α and β, may be used to calculate the sample size for a clinical study.
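
For example, Cohen's d (the standardised difference of means) can be computed as follows, using illustrative blood pressure data:

```python
import numpy as np

def cohens_d(x, y):
    """Standardised difference of means (Cohen's d) using the pooled SD."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Illustrative systolic blood pressure readings from two trial arms
treatment = np.array([128.0, 131.0, 125.0, 135.0, 129.0])
control   = np.array([138.0, 140.0, 133.0, 142.0, 136.0])
print(round(cohens_d(treatment, control), 2))
```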

How to cite: Effect Size [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/effect-size

 

Effectiveness refers to the ability of an intervention (drug, device, treatment, test, pathway etc.) to provide the desired outcome(s) in the relevant patient population. Effectiveness is typically assessed through a clinical trial. For economic evaluation, effectiveness is often expressed in quality-adjusted life years (QALYs), which capture both survival and quality of life and so allow the benefits of treatment to be weighed alongside its side effects.

 

How to cite: Effectiveness [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/effectiveness/

 

Efficacy is the benefit of an intervention gained under ideal conditions, such as in a randomised controlled trial.  ‘Ideal conditions’ refers to an experimental and controlled setting, where contextual factors (treatment administration, population characteristics, healthcare system characteristics) are fixed and balanced across the two (or more) study groups through randomisation, blinding and standardisation.  The design of the clinical trial is usually optimised to show the greatest benefit of the investigated intervention(s). Regulatory authorities are primarily interested in the balance of efficacy and harms (adverse effects) measured in clinical trials, whereas in health technology assessment the focus is on benefits that are achieved in usual practice, where the contextual factors are not fixed.

 

How to cite: Efficacy [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/efficacy/

 

In health economics efficiency refers to either obtaining the greatest health benefit from interventions using the available resources, or achieving a given health benefit in a way that minimises costs/resource use.  A distinction is often made between technical efficiency and allocative efficiency.  Technical efficiency concerns how best to achieve a single, given, objective: for example, how can a procedure such as a coronary artery bypass graft (CABG) be undertaken to achieve the best outcome for patients in terms of quality-adjusted survival? Alternatively, how can a CABG be successfully performed by making the best use of the resource inputs (hospital staff and theatre time, diagnostic work-up, medications, follow-up care…)?  Allocative efficiency concerns competing objectives, which may not all be capable of implementation: whether to implement a particular intervention (and if so how much), given competing demands on the health care budget? What mix of interventions delivers the greatest health gain?  For example, should we expand the use of CABG to a new group of patients, or should we invest in a new screening programme for patients at risk of coronary artery disease?  Opportunity cost is a key concept in making judgements about allocative efficiency, often represented in health technology assessment by a willingness to pay threshold.

 

How to cite: Efficiency [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/efficiency/

 

The EuroQol Five Dimension (EQ-5D) is an example of a generic measure of quality of life, which is used in many clinical trials and other prospective studies. The EQ-5D questionnaire consists of 5 questions relating to different domains of quality of life (mobility, self-care, usual activities, pain/discomfort, anxiety/depression), for each of which there are 3 levels of response (no problems, some problems or severe problems). More recently, questions with 5 levels of response have been introduced (called EQ-5D-5L). The instrument is quick and easy to use, extensively researched/validated and translated into many different languages. Most importantly it is not disease-specific and is therefore applicable to most disease areas (but less applicable in mental health and diseases causing specific disabilities such as blindness), and it supports comparisons of interventions across disease areas. The importance of EQ-5D is that, based on population surveys, utility scores are available for each of the possible responses. These can be combined with appropriate durations for each measurement to generate quality-adjusted life years for each study subject, which can then be used to drive cost-utility analyses of the intervention of interest. It is generally recommended that in clinical trials EQ-5D is administered alongside a disease-specific quality of life instrument which is more sensitive to specific aspects of the condition and the potential impact of therapy. NICE guidelines state that the EQ-5D is the preferred measure of HRQL in adults, and “when EQ-5D data are not available or are inappropriate for the condition or effects of treatment, the valuation methods should be fully described and comparable to those used for the EQ-5D”.

 

How to cite: EQ-5D [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/eq-5d/

 

When used in healthcare, equity in health refers to the fairness in the distribution of health across individuals. It may also refer to the distribution of health care (for example, expenditure, utilisation or access to care), from which equity in health is assumed to be derived.  Equity is rooted in ethical principles of distributive justice: its application recognises the importance not only of maximising health gains (efficiency) but also of achieving a fair distribution of these gains.  Opinions may differ as to which distributional aspects are considered relevant.  WHO defines equity as ‘the absence of avoidable or remediable differences among groups of people, whether those groups are defined socially, economically, demographically, or geographically’.  Horizontal equity refers to those with equal needs receiving the same treatment, irrespective of characteristics (demographic or socio-economic) unrelated to need.  Vertical equity refers to treating those with different needs in an appropriately different manner. In the UK, NICE has explored social values that may underpin deliberative decision-making by consulting its citizens’ panel.  Some social values suggest that certain groups may be more deserving of health gains: for example those with severe disease (addressed by NICE’s end-of-life criteria) or at risk of imminent death (‘rule of rescue’), illnesses resulting from factors outside the person’s control, or rare diseases, or where non-health consequences (productivity, caring responsibilities) are important.  Although there have been attempts to produce quantified adjustments, for example modifications to QALY gains, to account for distributional objectives, at present equity issues are considered qualitatively.

 

How to cite: Equity [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/equity/

 

Evidence-based medicine is the deliberate and explicit use of the current best evidence in combination with clinical knowledge and experience when making decisions on patient care, rather than basing clinical decisions solely on tradition or theoretical reasoning. Coined by a group from McMaster University in the late 1980s, the term describes an approach that aims to make clinical practice more grounded in up-to-date science, and therefore safer, more consistent and more cost-effective. Evidence-based medicine recognises the inability of traditional authorities (especially textbooks) to keep pace with the rapid development of evidence, the emerging availability of information technology to provide access to evidence, and the need for strong critical appraisal skills to evaluate and interpret the results of studies of different types in real time. A strong emphasis was placed on the importance of RCTs as sources of unbiased evidence, and on metrics such as ‘numbers needed to treat’ to summarise and communicate evidence in clinical settings. Writing in the BMJ in 1996, Sackett et al. defined evidence-based medicine as “requir[ing] a bottom up approach that integrates the best external evidence with individual clinical expertise and patients’ choice”.

 

How to cite: Evidence-Based Medicine [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/evidence-based-medicine/

 

The expected value of partially perfect information is the price that a healthcare decision maker would (in theory) be willing to pay in order to gain perfect information for one or more factors (i.e. inputs to an economic model) which may influence which treatment alternative is preferred, based on the analysis of uncertainty in the relevant cost-effectiveness analysis. The EVPPI is calculated as the difference between the expected monetary value of health gain when the choice between therapy alternatives (as represented in an economic model) is made on the basis of currently available information (i.e. with uncertainty in the factor(s) of interest) and when the choice is made based on perfect information for those factors (no uncertainty). EVPPI is closely related to EVPI (and EVSI) and is primarily used to assess the value of collecting further information, especially when this may involve expensive trials, registries or other observational studies.

 

How to cite: Expected Value of Partially Perfect Information (EVPPI) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/expected-value-of-partially-perfect-information-evppi/

 

The expected value of perfect information is the price that a healthcare decision maker would be willing to pay to have perfect information regarding all factors that influence which treatment choice is preferred as the result of a cost-effectiveness analysis. This is the value (in money terms) of removing all uncertainty from such an analysis. EVPI is calculated as the difference between the expected monetary value of health gain when the choice between therapy alternatives is made on the basis of currently available information (i.e. with uncertainty in the factors of interest) and when the choice is made based on perfect information (no uncertainty in any factor).
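
Given per-iteration net monetary benefit samples from a probabilistic sensitivity analysis, EVPI is the expected value of choosing with perfect information minus the value of the best choice on current (expected) information. A sketch with simulated draws (all figures illustrative):

```python
import numpy as np
rng = np.random.default_rng(0)

# Net monetary benefit per PSA iteration for two strategies (illustrative draws)
n = 10000
nb = np.column_stack([rng.normal(10000, 3000, n),    # strategy A
                      rng.normal(10500, 3000, n)])   # strategy B

value_current = nb.mean(axis=0).max()   # best strategy on expected net benefit
value_perfect = nb.max(axis=1).mean()   # expected net benefit choosing per draw
evpi_per_decision = value_perfect - value_current
print(round(evpi_per_decision, 1))
```

Population EVPI is then obtained by scaling this per-decision value by the number of patients affected over the lifetime of the decision.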

 

How to cite: Expected Value of Perfect Information (EVPI) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/expected-value-of-perfect-information-evpi/

 

The expected value of sample information estimates the value of a decision to collect additional sample information.  Typically, additional research reduces, rather than eliminates, uncertainty, meaning that perfect information is never fully available.  Therefore, EVSI can be utilised to help determine the optimal research design (study population, comparison to be tested, sample size) to maximise both the reduction in uncertainty and the value to society of conducting the study.

 

How to cite: Expected Value of Sample Information (EVSI) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/expected-value-of-sample-information-evsi/

 

F

A fast track appraisal (FTA) is a form of single technology appraisal (STA) undertaken by UK NICE with a shorter process time, intended to speed up access to promising new treatments and to enable assessment resources to be prioritised.  NICE makes an early assessment of the likely cost-effectiveness of the new intervention.  If the criteria for an FTA are met a shortened process is followed, delivering a final determination in 32 weeks, 16 weeks less than for a conventional single technology appraisal.  The only difference for those making submissions is that proposals for a patient access scheme need to be included in the company evidence submission.

 

How to cite: Fast Track Appraisal (UK NICE) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/fast-track-appraisal-uk-nice

 

H

In economic models that use Markov-type processes, it is generally recommended that a ‘half-cycle correction’ be built into the analysis, to account for the fact that events and transitions can occur at any point during the cycle, not necessarily at the start or end of each cycle.  For example, if we know that 100 people are alive at month ten, and that 90 people are alive at month eleven, we do not necessarily know at what point those 10 patients died between months ten and eleven.  In such cases, it is usual to assume that the event occurred at the mid-point of the cycle.  However, for many health events, the implications of the event may not actually become apparent until the next cycle. For instance the increased costs associated with disease progression may not occur until progression is clinically confirmed, which may only happen at regular routine follow-up visits (i.e. at the start or end of a cycle).  Likewise, if packs of medicine are prescribed on a monthly basis, then a monthly cost to the healthcare system would occur in full, no matter what point the person died within the cycle.  Therefore, it is usually recommended that half cycle correction is applied carefully, and only to those aspects where the timing of the event and its consequences are not known.
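
One common implementation is the trapezoidal form, which counts only half of the first and last cycle's accrual; a minimal sketch with illustrative per-cycle values:

```python
import numpy as np

def half_cycle_corrected_total(values):
    """Trapezoidal ('half-cycle') correction: count only half of the first and
    last cycle's value, approximating events occurring mid-cycle on average."""
    v = np.asarray(values, dtype=float)
    return v.sum() - 0.5 * v[0] - 0.5 * v[-1]

# Illustrative per-cycle QALY accruals from a Markov trace
qalys_per_cycle = [0.98, 0.95, 0.91, 0.86, 0.80]
print(half_cycle_corrected_total(qalys_per_cycle))  # 4.50 - 0.49 - 0.40 = 3.61
```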

 

How to cite: Half-Cycle Correction [online]. (2016). York; York Health Economics Consortium; 2016. http://www.yhec.co.uk/glossary/half-cycle-correction/

 

Hand searching refers to retrieval methods that rely on comprehensive browsing and scanning of all content in a specific journal issue or supplement, conference proceedings or web site. It is usually done where these items cannot be searched effectively with a database. In the past hand searches were done with a physical hard copy; however, the proliferation of electronic sources and the improved quality of some search interfaces mean that hand searching may now be performed online.

 

How to cite: Hand Searching [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/hand-searching/

 

Health economics is a field of economics that focuses on the ‘analysis and understanding of efficiency, effectiveness, values and behaviours involved in the production and consumption of health and healthcare’. Health economists are interested in the efficient design of healthcare systems (including insurance systems), the economic evaluation of health technologies, health-related behaviours and the impacts of incentives, financial and otherwise, to modify these behaviours. Kenneth Arrow, a founding father of health economics, pointed out in 1963 that health and healthcare differ from other areas of the economy in that there is extensive government intervention, a great deal of uncertainty in several dimensions, asymmetric information, barriers to entry, externalities and the presence of a third-party agent (physician). As a result, purchasing decisions are made without direct reference to the price of the product or service. Therefore, the economics of health and healthcare contains some unique characteristics. In particular, economic evaluations of health technologies tend not to be cost-benefit analyses (in which resources used and benefits obtained are monetized) but cost-effectiveness analyses, which are rooted in ‘extra-welfarist’ principles set out by Culyer and others, and rely on methods and principles drawn from clinical trials, health services research and epidemiology. Health technology assessment draws mostly on the methods of economic evaluation in healthcare, but other aspects of health economics and econometrics (analytical techniques) may be used to inform healthcare policy and design of public and private health care systems.

 

How to cite: Health Economics [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/health-economics/

 

Health Economics and Outcomes Research (HEOR) is the most common label given to the function within pharmaceutical and life science companies with the responsibility for generating evidence of the value of new interventions for reimbursement agencies and local health care payers. While ‘HE’ refers mostly to skills in economic evaluation, ‘OR’ may refer to expertise in observational studies or in the development and use of new health outcomes measurements, especially PROs. HEOR staff will advise on the design of trials to best meet the needs of healthcare payers, as well as on other studies and analyses (meta-analyses, modelling, observational studies) that may be required. They will ensure that these studies are executed, often using external researchers, and will communicate the results internally and externally. If based in specific countries they may be responsible for creating and delivering submissions to reimbursement agencies or local healthcare payers. HEOR staff work closely with specialists in Market Access.

 

How to cite: Health Economics and Outcomes Research (HEOR) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/health-economics-and-outcomes-research-heor/

 

Health Technology Assessment (also known as HTA) is the multi-disciplinary evaluation of the clinical effectiveness and/or cost-effectiveness and/or the social and ethical impact of a health technology on the lives of patients and the health care system. Information is summarised in a comprehensive, systematic, transparent, unbiased and robust manner.

The main purpose of health technology assessment is to inform health care decision-makers, in particular regarding the reimbursement of new drugs and other health care interventions.  Health technology assessment produces advice about whether a health technology should be used, and if so how it is best used, which patients will benefit and what is the magnitude of health benefit in relation to the cost.  While all health technology assessment frameworks include assessments of unmet need, clinical effectiveness and the financial/budget impact of a new technology, many also include assessment of cost-effectiveness and consider other aspects (sometimes referred to as ‘domains’), such as wider social impact, equity, ethical and legal issues.  Health technology assessment differs from appraisal: the former is the analytical process of gathering, assessing and summarising available information, whereas appraisal refers to the political process of producing guidance, taking into account the assessment as well as other factors (such as social values, political/policy, availability of resources for implementation).

 

How to cite: Health Technology Assessment [online]. (2016). York; York Health Economics Consortium; 2016. http://www.yhec.co.uk/glossary/health-technology-assessment/

 

Highly Specialised Technologies (HST) are interventions for very rare conditions, for which NICE in the UK took over responsibility for assessment in 2013.  The technologies are assessed by a separate specialist HST committee, using a wider framework than for single technology assessments (STAs).  This framework recognises the difficulty in undertaking clinical trials for these therapies and the resulting uncertainty in estimates of cost-effectiveness, as well as the potential impact of the technologies beyond direct health benefits, the implications for service delivery and the difficulties for manufacturers in recouping ‘sunk’ R&D costs.  A more generous cost-effectiveness threshold of £100,000 per QALY gained (rising to £300,000 in some circumstances) is applied.  Outcomes-based managed access agreements are often feasible, given the ability to follow up the (small) populations of treated patients.  Of great interest are the criteria for determining entry to HST, which include size of the target patient group, the nature of the condition, the need for a specialised service supported by national commissioning and cost of the technology itself.

 

How to cite: Highly Specialised Technologies (UK NICE) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/highly-specialised-technologies-uk-nice/

 

A statistical hypothesis test is a method of statistical inference in which two datasets obtained by sampling are compared, or a data set is compared against a synthetic data set based on an idealized model (data distribution) describing a population. A null hypothesis proposes no relationship between these two datasets, with an alternative hypothesis that there is indeed a relationship. In a hypothesis test a test statistic is calculated and compared with a pre-defined critical value, determined by the chosen significance level (α). If the statistic falls above the critical value the null hypothesis is rejected. In this case it is unlikely that the reported observations would occur if the null hypothesis were true (we cannot ever say that the null hypothesis is certainly ‘false’). By historical convention a value of 5% is used for α. A common example is the use of Student’s t test to compare two sample means, summarising the treatment outcome reported for each arm of a clinical trial. The null hypothesis is that there is no difference between these values (i.e. no treatment effect), and the distribution of the treatment outcome is assumed to be normal with the same variance in each study arm. If the value of t calculated from the means for the two study arms falls below the reference value (for a given α and sample size) then the null hypothesis cannot be rejected, and it cannot be concluded that there is likely to be a treatment effect.
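
A worked version of the two-arm comparison described above, using simulated data and scipy's two-sample t-test (the data and effect are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
arm_a = rng.normal(loc=5.0, scale=2.0, size=60)   # simulated outcome, arm A
arm_b = rng.normal(loc=6.0, scale=2.0, size=60)   # simulated outcome, arm B

# Two-sample t-test of the null hypothesis of no difference in means,
# assuming equal variances in the two arms (as in the example above)
t_stat, p_value = stats.ttest_ind(arm_a, arm_b)
reject_null = p_value < 0.05   # conventional 5% significance level
print(round(t_stat, 2), round(p_value, 4), reject_null)
```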

How to cite: Hypothesis Testing [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/hypothesis-testing/

 

I

Incidence quantifies the number of new cases of a disease or events occurring in a specified time period, often a year, to a defined population who are at risk of the disease/event. It is given as a rate. Cumulative incidence describes the proportion of those at risk who develop the disease, or experience an event, over a specified period of time (often aggregated over a number of years). Incidence density, sometimes called force of morbidity or mortality, is a more precise concept in which those who develop a disease are removed from the eligible population as they are no longer eligible to develop the disease. So the denominator for incidence density becomes the aggregated person-time of eligibility (‘person-years of exposure’), rather than the number of individuals eligible at the start. Specific types of incidence rate are mortality rates (deaths in population), morbidity (non-fatal disease in population), case fatality rates (deaths in diseased population) and attack rate (cases of disease in a population at risk, usually over a short period of observation). Comparisons of incidence rates may be misleading if there are differences in risk factors (for example, the age distribution) between the populations compared. In this case various techniques such as standardisation (often used to compare mortality) or regression may be used to adjust for these known differences.
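
A small worked example of incidence density (the follow-up data are illustrative):

```python
# Incidence density: events per aggregated person-time of eligibility.
# Each subject contributes person-time only while at risk.
follow_up_years = [2.0, 5.0, 1.5, 4.0, 3.0]  # years each subject remained at risk
events = 2                                    # new cases observed

person_years = sum(follow_up_years)           # 15.5 person-years of exposure
incidence_density = events / person_years
print(f"{incidence_density:.3f} cases per person-year")  # ≈ 0.129
```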

 

How to cite: Incidence [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/incidence/

 

An incremental cost-effectiveness ratio is a summary measure representing the economic value of an intervention, compared with an alternative (comparator). It is usually the main output or result of an economic evaluation. An ICER is calculated by dividing the difference in total costs (incremental cost) by the difference in the chosen measure of health outcome or effect (incremental effect) to provide a ratio of ‘extra cost per extra unit of health effect’ – for the more expensive therapy vs the alternative. In the UK the QALY is most frequently used as the measure of health effect, enabling ICERs to be compared across disease areas, but in other healthcare systems other measures of health effect may be used. In decision-making ICERs are most useful when the new intervention is more costly but generates improved health effect. ICERs reported by economic evaluations are compared with a pre-determined threshold (see Cost-effectiveness threshold) in order to decide whether choosing the new intervention is an efficient use of resources.
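
The calculation itself is a one-line ratio; the totals below are illustrative model outputs, not real figures:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost per extra unit of health effect (here, per QALY)."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Illustrative per-patient totals from an economic model
value = icer(cost_new=14000, qaly_new=6.2, cost_old=9000, qaly_old=5.9)
print(f"ICER = {value:,.0f} per QALY gained")  # 5000 / 0.3 ≈ 16,667
```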

 

How to cite: Incremental Cost-Effectiveness Ratio (ICER) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/incremental-cost-effectiveness-ratio-icer/

 

An indirect treatment comparison is a method of deriving a comparative estimate between two treatments (treatment A and treatment B) which are not directly compared in head to head trials (or other studies), but which have both been compared to another intervention (treatment C). Treatments A and B can be indirectly compared via the common comparator C.
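
The simplest adjusted approach is the Bucher method, which combines the two relative effects on the log scale; the sketch below uses illustrative hazard ratios and standard errors:

```python
import numpy as np

def bucher_itc(log_hr_ac, se_ac, log_hr_bc, se_bc):
    """Bucher adjusted indirect comparison of A vs B via common comparator C,
    on the log scale (e.g. log hazard ratios or log odds ratios)."""
    d_ab = log_hr_ac - log_hr_bc
    se_ab = np.sqrt(se_ac**2 + se_bc**2)
    ci = (np.exp(d_ab - 1.96 * se_ab), np.exp(d_ab + 1.96 * se_ab))
    return np.exp(d_ab), ci

# Illustrative inputs: A vs C gives HR = 0.70, B vs C gives HR = 0.85
hr, ci = bucher_itc(np.log(0.70), 0.12, np.log(0.85), 0.10)
print(round(hr, 2), tuple(round(x, 2) for x in ci))
```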

 

How to cite: Indirect Treatment Comparison (ITC) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/indirect-treatment-comparison-itc/

 

Intention-to-treat (ITT) analysis refers to analysis based on the initial treatment assignment, and not on the treatment eventually received. This type of analysis, now widely accepted as standard for the analysis of clinical trials, provides an unbiased comparison across the treatment groups. If cross-overs or drop-outs from the clinical trial are not random and are imbalanced across treatment groups (i.e. potentially related to characteristics of the new intervention) then comparisons of groups as treated (‘on therapy’, OT) may suffer from bias. When using the efficacy results from trials in economic models, care needs to be taken to select the most appropriate efficacy values (ITT or OT) from the source studies, especially if movement of cohort members between treatment groups is being considered explicitly in the model. A useful validation check is to see how well the model replicates the results of the source study.

 

How to cite: Intention to Treat [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/intention-to-treat/

 

Internal consistency is the degree of interrelatedness among the items in a multi-item measure or tool, such as a Patient Reported Outcome Measure (PROM), and the consistency of an individual item within the measure. Three statistical measures of internal consistency reliability are commonly used: Cronbach’s alpha, the split-half test and the Kuder-Richardson test. The most typical measure is Cronbach’s alpha, which has a range of 0-1: the closer to 1, the more reliable the assessment. The scale reflects how much agreement there is between the items in a test; the more agreement, the more the questions are aligned or alike. The scale is as follows: 0.00-0.69 = poor alignment; 0.70-0.79 = fair alignment; 0.80-0.89 = good alignment; 0.90-0.99 = strong alignment.
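
As an illustration, Cronbach's alpha can be computed directly from a respondents-by-items score matrix (the data below are made up):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative responses to a 4-item PROM from 5 respondents
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [4, 5, 4, 5],
          [3, 3, 3, 4],
          [1, 2, 1, 2]]
print(round(cronbach_alpha(scores), 2))  # ≈ 0.96: strong alignment
```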

How to cite: Internal Consistency Reliability [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/internal-consistency-reliability/

 

An intervention review is a systematic review of evidence from health research around the effects of a specific intervention in a specified population and/or setting. Methodological guidance on undertaking this kind of review is provided in The Cochrane Handbook for Systematic Reviews of Interventions.

 

How to cite: Intervention Review [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/intervention-review/

 

Item response models may refer either to item-response theory (IRT) or Rasch models. Although not synonymous, both forms of model are measurement or psychometric methods that can be applied to a wide range of data, including that derived from patient-reported outcome (PRO) measures. A particular strength of these models is that their focus is (as the name implies) on the item-level rather than test-level (as seen in traditional or classical test theory). This has a number of benefits, the two most important of which are 1) researchers can identify (and potentially remove or amend) poorly performing items, for example those that are not relevant to the target audience, and 2) item banks (drawn from many PRO measures) together with computer-adaptive testing (CAT) may enable the creation of shorter, more relevant, but equally accurate tests.
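
For example, the simplest item response model, the Rasch model, expresses the probability of a positive item response as a logistic function of the gap between a person's latent trait and the item's difficulty; the values below are illustrative.

```python
import math

def rasch_prob(theta, b):
    """Rasch (one-parameter IRT) model: probability that a respondent with
    latent trait theta endorses an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(round(rasch_prob(theta=0.5, b=-0.2), 2))  # ≈ 0.67
```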

 

How to cite: Item Response Models [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/item-response-models/

 

L

In diagnostic studies, it is often reported that early detection of certain conditions can lead to improved outcomes, such as improved survival.  However, because survival is usually measured from the point of diagnosis, it is not always possible to make accurate comparisons of total survival (i.e. from onset of disease rather than from diagnosis).  Lead time bias refers to the challenge of disentangling genuinely increased survival, due to optimal treatment made possible by earlier diagnosis, from apparently increased survival simply because the patient was followed from an earlier timepoint.  If the delay period (between early diagnosis and late diagnosis, usually when symptoms are apparent) is known, it is relatively easy to make adjustments to survival estimates.  However, in most cases the delay period is not properly known.  To solve this, diagnostic studies might be designed as randomised controlled trials, with patients either randomised to early testing or to a non-testing arm.  However, such designs would face ethical challenges, and would likely take many years to provide mature data.

How to cite: Lead Time Bias [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/lead-time-bias/

 

A literature review is a search and evaluation of available published studies in a chosen topic area. Occasionally the ‘grey’ literature (unpublished reports, newsfeeds, web sites etc.) may be included. In economic evaluation, literature reviews are used to identify and summarise data and outcomes for a wide range of purposes, e.g. collating and summarising the results of clinical or economic evaluations of a specific health intervention, or identifying data inputs for possible use in future economic modelling. Literature reviews vary widely in their scope and quality, which may depend in part on the purpose and the resources (cost, time) available. Reviews range from short pragmatic reviews to systematic literature reviews, which provide a more robust and comprehensive answer to the review question and which are frequently required in submissions presented to reimbursement agencies. Some reviews may also include a quantitative synthesis (meta-analysis) of the identified data.

 

How to cite: Literature Review [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/literature-review/

 

M

Managed access agreements (MAAs) are a version of conditional reimbursement or coverage with evidence development used in the UK NHS.  They constitute agreements between NHS England and the sponsors (manufacturers) of new technologies that enable new interventions (usually drugs) to become available for a limited time period at a discounted price.  These arrangements are co-ordinated by NICE, specifically via the Cancer Drugs Fund for oncology medicines.  Clinicians and patient advocacy groups are involved, as well as clinical representatives of NHS England.  An MAA refers to an arrangement that addresses a significant area of uncertainty in the evidence base, as identified by the technology evaluation committee at NICE.  MAAs have been used in many single technology assessments (STAs) and are expected for most highly specialised technologies (HSTs).  MAA proposals include an agreed rationale and duration for the arrangement, populations covered (in particular where they come in the care pathway), clear criteria for starting and stopping the new therapy, definition of outcomes, methods of data collection and frequency of reporting, together with a commercial proposition (price discount), financial risk management plans and an understanding of what will happen if reimbursement is eventually withdrawn.

 

How to cite: Managed Access Agreement [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/managed-access-agreement/

 

Market access refers to the process of ensuring that treatments (medicines, medical devices etc.) for which marketing authorisation has been obtained from regulatory authorities are available (reimbursed, funded) to all patients who may benefit, such that a clinician is in practice able to recommend and administer the treatment to a patient. A first step is reimbursement, frequently informed by health technology assessment (HTA). However, successful reimbursement does not mean that all eligible patients will receive the new treatment, nor that they will necessarily have ‘access’ to it. Market access addresses this problem by assessing barriers to uptake and proposing and implementing strategies to overcome these barriers. These may include collection and communication of evidence relevant to different decision makers, implementing pricing strategies (discounts, payment by results), or provision of tools (apps etc.) or staff to provide expert advice to health system administrators. In pharmaceutical and other life science companies, Market Access and Health Economics and Outcomes Research professionals will generally work collaboratively to develop and execute plans for generating and communicating evidence of value of new interventions to health care reimbursers and payers.

 

How to cite: Market Access [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/market-access/

 

The Markov model is an analytical framework that is frequently used in decision analysis, and is probably the most common type of model used in economic evaluation of healthcare interventions. Markov models use disease states to represent all possible consequences of an intervention of interest. These are mutually exclusive and exhaustive and so each individual represented in the model can be in one and only one of these disease states at any given time. Examples of health states that might be included in a simple Markov model for a cancer intervention are: progression-free, post-progression and dead. Individuals move (‘transition’) between disease states as their condition changes over time. Time itself is considered as discrete time periods called ‘cycles’ (typically a certain number of weeks or months), and movements from one disease state to another (in the subsequent time period) are represented as ‘transition probabilities’. Time spent in each disease state for a single model cycle (and transitions between states) is associated with a cost and a health outcome. Costs and health outcomes are aggregated for a modelled cohort of patients over successive cycles to provide a summary of the cohort experience, which can be compared with the aggregate experience of a similar cohort, for example one receiving a different (comparator) intervention for the same condition. Markov models are limited in their ability to ‘remember’ what occurred in previous model cycles. For example the probability of what occurs after disease progression may be related to the time to progression. Although to some extent health states can be defined ingeniously to address this complexity, other modelling approaches may be required for more complex diseases.
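
A minimal cohort trace for a three-state model like the cancer example above; the transition probabilities, cycle length and utilities are all illustrative assumptions:

```python
import numpy as np

# Illustrative monthly transition probabilities (rows: from-state, columns: to-state)
# States: [progression-free, post-progression, dead]; each row sums to 1.
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])

cohort = np.array([1.0, 0.0, 0.0])       # everyone starts progression-free
utilities = np.array([0.80, 0.55, 0.0])  # assumed utility per state

qalys = 0.0
for _ in range(120):                          # 120 monthly cycles = 10 years
    qalys += (cohort * utilities).sum() / 12  # utility accrued this cycle
    cohort = cohort @ P                       # apply transition probabilities
print(round(qalys, 2))
```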

 

How to cite: Markov Model [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/markov-model/

 

Meta-analysis is a statistical technique for combining data from independent studies to produce a single estimate of effect and associated uncertainty around this estimate. In the context of health technology assessment source studies are usually randomised controlled trials (RCTs). Meta-analysis can be used whenever there is more than one study that has estimated the effect of an intervention (or risk factor) using the same outcome measure, and source studies are sufficiently similar in terms of the participants, interventions compared, settings, duration and definition and measurement of the outcome measure, so that it is reasonable to combine the results of these studies. This can be assessed qualitatively in a systematic review prior to the quantitative synthesis, and heterogeneity in the results of the source trials can be tested statistically. Results of meta-analyses are usually reported based on fixed and random effects modelling, and results are displayed graphically using a Forest plot. The PRISMA Checklist has been developed to guide reporting of meta-analyses more broadly, and a systematic approach to undertaking meta-analyses of RCTs in many areas of healthcare has been co-ordinated by the Cochrane collaboration. More recently meta-analysis techniques have been further developed to support comparisons where head to head trials (direct evidence) are lacking.
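
A fixed-effect (inverse-variance) pooling sketch with illustrative trial results:

```python
import numpy as np

def fixed_effect_pool(effects, ses):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    w = 1.0 / np.asarray(ses, dtype=float) ** 2   # weight = 1 / variance
    pooled = np.sum(w * np.asarray(effects)) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

# Illustrative log odds ratios and standard errors from three RCTs
log_or = [-0.35, -0.20, -0.42]
se     = [0.15, 0.10, 0.20]
est, se_pooled = fixed_effect_pool(log_or, se)
low, high = est - 1.96 * se_pooled, est + 1.96 * se_pooled
print(round(np.exp(est), 2), round(np.exp(low), 2), round(np.exp(high), 2))
```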

 

How to cite: Meta-Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/meta-analysis/

 

Micro-simulation is a form of economic modelling where modelled individuals are passed through the model one-by-one, their results are stored and then the experience of a cohort is obtained by aggregating the individual results. This is in contrast to most Markov modelling where the full cohort’s experience is considered in a single pass through the model. Micro-simulation models are particularly useful when individuals have a mix of interrelated (and potentially changing) risk factors that influence their experience of a (chronic) disease over time, or where interactions between individuals are important (e.g. infectious disease). Although more complex to create, they may have more general application (than cohort Markov models), in particular being applicable to cohorts with different characteristics (risk factor mix) at the start of the modelled period.

 

How to cite: Micro-Simulation [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/micro-simulation/

 

When assessing the clinical utility of therapies intended to improve subjective outcomes, the amount of improvement that is important to patients must be determined. The smallest benefit of value to patients is called the minimal clinically important difference (MCID). The MCID is a patient-centered concept, capturing both the magnitude of the improvement and the value patients place on the change. Using patient-centered MCIDs is important for studies involving patient-reported outcomes, for which the clinical importance of a given change may not be obvious to clinicians selecting treatments. The MCID defines the smallest amount an outcome must change to be meaningful to patients.

How to cite: Minimally Clinically Important Differences (MCID) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/minimally-clinically-important-differences-mcid/

 

Missing values occur when no data value is stored for a variable of interest in a study observation. In most study datasets there are missing values, and the amount and type of missing values can have an important impact on the conclusions that can be drawn from the data. If the likelihood of data being missing is related (a) to the study outcome or (b) to other explanatory variables, then the study results will be biased. For example, if the prevalence of a health condition is related to age and older people are less likely to report whether they have the condition (a), then the study will under-report the prevalence of the condition.  If older people are less likely to provide their age (b), then the study will under-report the relationship between the health condition and age. Data are classified as ‘missing completely at random’ (MCAR) if the chances of them being missing are independent of observable variables and of unobservable parameters of interest. In this case the analysis will not be biased (but will be more uncertain). Data are (somewhat confusingly) called ‘missing at random’ (MAR) if the chances of them being missing are not associated directly with the outcome of interest but may be accounted for by variables for which there is complete information. Otherwise missing data are ‘missing not at random’ (MNAR). For MNAR and MAR it is important to understand the patterns of missingness and to use appropriate statistical techniques to control for possible bias (in the case of MAR some statistical analyses may be unbiased). Simple exclusion of study subjects with missing values will usually maintain or increase the bias in the results. Typical methods used to adjust for missing values include imputation (various methods exist), partial deletion, inverse propensity weighting or more complicated maximum likelihood estimation. In economic modelling, sensitivity analysis may be used to explore the impact of ‘best case’ and ‘worst case’ assumptions about missing data in source studies.

How to cite: Missing Values [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/missing-values/

 

A mixed treatment comparison (MTC) is a statistical method that uses both direct evidence (from trials directly comparing the interventions of interest) and indirect evidence (from trials comparing each intervention of interest with a further alternative) to estimate the comparative efficacy and/or safety of interventions for a defined population. They are generalisations of traditional meta-analysis, and the analytical methods employed are now frequently referred to as ‘network meta-analysis’.

 

How to cite: Mixed Treatment Comparison [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/mixed-treatment-comparison/

 

Monte-Carlo simulation is a form of modelling used in many areas of science where model inputs are drawn from distributions and are not treated as fixed values. Key elements of a Monte-Carlo simulation are to (a) define a domain of possible inputs (parameters); (b) generate input values randomly from probability distributions across the domain; (c) perform a deterministic computation of the model output based on the selected inputs; (d) repeat for a sufficient number of ‘draws’ of input values; (e) aggregate the results. In health care evaluations, micro-simulations frequently contain Monte-Carlo elements, for example using probability distributions to construct cohorts of patients with mixes of risk factors that may impact on their future experience. Probabilistic sensitivity analysis is a form of Monte-Carlo simulation where parameter values are varied stochastically to estimate the distribution of the model output value.
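
Steps (a) to (e) map directly onto code. The toy sketch below samples two assumed input distributions, runs a deterministic cost model, and aggregates the draws (every number is illustrative):

```python
import numpy as np
rng = np.random.default_rng(123)

def model(rel_risk, baseline_risk, cost_tx, cost_event):
    """(c) Deterministic computation: incremental cost and events averted."""
    events_averted = baseline_risk * (1 - rel_risk)
    inc_cost = cost_tx - events_averted * cost_event
    return inc_cost, events_averted

# (a) the input domain: an uncertain relative risk and an uncertain baseline risk
draws = []
for _ in range(5000):                          # (d) repeat for many draws
    rr = np.exp(rng.normal(np.log(0.8), 0.1))  # (b) lognormal draw for the ratio
    p0 = rng.beta(30, 70)                      # (b) beta draw for the probability
    draws.append(model(rr, p0, cost_tx=500, cost_event=4000))

mean_inc_cost, mean_averted = np.mean(draws, axis=0)  # (e) aggregate the results
print(round(mean_inc_cost, 0), round(mean_averted, 3))
```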

 

How to cite: Monte-Carlo Simulation [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/monte-carlo-simulation/

 


Multi-criteria decision analysis (MCDA) is a domain of operational research that is beginning to be used in healthcare decision-making. The technique recognises that decision-makers use multiple and disparate criteria when making decisions (for example about introducing new health care interventions or facilities), and that it is important to make explicit the impact on any decision of all the criteria applied and the relative importance attached to them. In MCDA the criteria affecting a decision are identified and weighted using explicit, transparent techniques. Different options (strategies, interventions etc.) are then scored against each criterion, and the weights are used to produce summary scores for comparative purposes. MCDA has been found attractive in health technology assessment, especially in healthcare systems where there is reluctance to rely primarily on a single decision metric (such as the ICER). It helps to make the assumptions underpinning decisions more transparent, which in principle may improve the accountability and consistency of decision-making. There are some technical issues around defining and weighting decision criteria, and scoring performance against those criteria for health care interventions, which have meant that uptake in healthcare has been quite slow. One area of recent application is benefit-risk analysis.
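
A minimal weighted-sum sketch of the scoring step, with hypothetical criteria, weights and scores:

```python
import numpy as np

# Hypothetical criterion weights (summing to 1) agreed by a decision panel:
# effectiveness, safety, budget impact.
weights = np.array([0.5, 0.3, 0.2])

# Each option scored 0-100 against each criterion (hypothetical scores).
scores = {
    "Intervention A": np.array([80, 60, 40]),
    "Intervention B": np.array([60, 80, 70]),
}

# Weighted summary score for comparative purposes.
for name, s in scores.items():
    print(name, float(weights @ s))
```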

 

How to cite: Multi-Criteria Decision Analysis (MCDA) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/multi-criteria-decision-analysis-mcda/

 


Multi-way sensitivity analysis is a technique that accounts for the fact that more than one parameter in a model is uncertain and that the output of the model is sensitive to the choice of value for each of these parameters (commonly the key parameters in the model). During multi-way sensitivity analysis the values of several parameters are changed simultaneously, and the impact that this has on the model output is investigated. This technique is limited to investigating the impact of changing relatively few (2 to 5) parameters simultaneously, given that the number of possible combinations of parameter values has the potential to get very large.
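
A minimal sketch of a three-way analysis over low/base/high values of three hypothetical parameters, showing how the number of model runs grows multiplicatively:

```python
from itertools import product

# Hypothetical model: net cost as a simple function of three uncertain parameters.
def model(risk_reduction, unit_cost, adherence):
    return unit_cost - 5_000 * risk_reduction * adherence

risk_reductions = [0.10, 0.20, 0.30]   # low / base / high
unit_costs = [800, 1_000, 1_200]
adherences = [0.6, 0.8, 1.0]

# Evaluate the model at every combination: 3 x 3 x 3 = 27 runs here,
# and the run count multiplies with every additional parameter.
for rr, uc, ad in product(risk_reductions, unit_costs, adherences):
    print(rr, uc, ad, model(rr, uc, ad))
```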

 

How to cite: Multi-way Sensitivity Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/multi-way-sensitivity-analysis/

 


A Multiple technology appraisal (MTA) undertaken by UK NICE is one where a comparison is made between one or more technologies used in one or more indications of interest.  The evidence assessment is undertaken by an independent academic group (Assessment Group – AG).  Formal consultees are invited to provide submissions: these include technology sponsors as well as clinical experts, NHS commissioning experts and patient experts. The MTA process is approximately 16 weeks longer than the single technology appraisal (STA) process.  Given this longer timeline, and the fact that the timings of marketing authorisations for new medicines in the same indications are independent of one another, MTAs are much less common than STAs for pharmaceutical therapies.  Detailed guidance on the MTA process (2014 process guide) is available at:

https://www.nice.org.uk/Media/Default/About/what-we-do/NICE-guidance/NICE-technology-appraisals/technology-appraisal-processes-guide-sept-2014.pdf

 

How to cite: Multiple Technology Appraisal (UK NICE) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/multiple-technology-appraisal-uk-nice/

 


N

When individuals at risk of a condition are administered a diagnostic or screening test for a condition of interest, the negative predictive value (NPV) is the proportion of those who test negative who indeed do not have the condition (true negatives). This statistic is influenced both by the sensitivity and specificity of the test itself and the prevalence of the condition in those tested. The importance of NPV in economic modelling of diagnostic tests is that 1-NPV gives the proportion of those who are unlikely to receive further tests or an intervention who could have potentially benefited (false negatives). In particular, in the case of screening, the reassurance of a negative test result may result in delayed diagnosis and treatment. NPV is closely related to positive predictive value (PPV).
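
A small worked sketch (hypothetical test characteristics) showing how NPV depends on prevalence as well as on the sensitivity and specificity of the test:

```python
def npv(sensitivity, specificity, prevalence):
    """Negative predictive value from test characteristics and prevalence."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# The same hypothetical test reassures more reliably in a low-prevalence
# screening population than in a high-prevalence clinic setting.
print(npv(0.90, 0.95, prevalence=0.01))  # ~0.999
print(npv(0.90, 0.95, prevalence=0.30))  # ~0.957
```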

 

How to cite: Negative Predictive Value [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/negative-predictive-value/

 


Net health benefit (NHB) is a summary statistic that represents the impact on population health of introducing a new intervention.  Net health benefit assumes that ‘lost health’ can be estimated as an ‘opportunity cost’ to represent the health that is foregone elsewhere as a result of moving funding to pay for a new intervention.  NHB is usually measured using QALYs and is calculated by: incremental gain in QALYs – (incremental cost / opportunity cost threshold).  A positive NHB implies that overall population health would be increased as a result of the new intervention, whilst a negative NHB implies that the health benefits of the new intervention are not sufficient to outweigh the health losses that arise from the healthcare that ceases to be funded in order to fund the new treatment.
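
A worked example of the calculation, with entirely hypothetical values:

```python
# NHB = incremental QALYs - (incremental cost / opportunity cost threshold).
inc_qalys = 0.25      # hypothetical incremental QALY gain
inc_cost = 4_000      # hypothetical incremental cost
threshold = 20_000    # hypothetical opportunity cost threshold per QALY

nhb = inc_qalys - inc_cost / threshold
print(nhb)  # ~0.05 QALYs: positive, so population health increases overall
```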

 

How to cite: Net Health Benefit [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/net-health-benefit/

 


Net monetary benefit (NMB) is a summary statistic that represents the value of an intervention in monetary terms when a willingness to pay threshold for a unit of benefit (for example a measure of health outcome or QALY) is known. NMB expresses both health outcomes and resource use in monetary terms, so that comparisons can be made without the use of ratios (such as ICERs). NMB is calculated as (incremental benefit x threshold) – incremental cost. Incremental NMB measures the difference in NMB between alternative interventions, a positive incremental NMB indicating that the intervention is cost-effective compared with the alternative at the given willingness-to-pay threshold. In this case the cost to derive the benefit is less than the maximum amount that the decision-maker would be willing to pay for this benefit.
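
A worked example with hypothetical values; note that this is the same decision rule as net health benefit, rescaled into money:

```python
# NMB = (incremental benefit x threshold) - incremental cost.
inc_qalys = 0.25      # hypothetical incremental benefit (QALYs)
inc_cost = 4_000      # hypothetical incremental cost
threshold = 20_000    # willingness-to-pay per QALY

nmb = inc_qalys * threshold - inc_cost
print(nmb)  # 1000.0: positive, so cost-effective at this threshold
```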

 

How to cite: Net Monetary Benefit [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/net-monetary-benefit/

 


Network meta-analysis is a statistical method using both direct and indirect evidence (conventionally from randomised controlled trials) to estimate the comparative efficacy and/or safety of a number of interventions compared with one another.  A network meta-analysis will usually contain multiple treatments and multiple sources of evidence. Typically a systematic review is used to assemble all trial evidence for efficacy/safety of the interventions of interest, in the population/condition and outcome measure of interest, into an evidence network that will inform the network meta-analysis. At this stage the comparability of populations, durations and outcome definitions, and the feasibility of the statistical analysis, are assessed. The reported differences in the outcome measure between interventions (and corresponding measures of uncertainty) in each trial are combined, typically using Markov chain Monte Carlo (MCMC) methods. In this way the benefit of randomisation in each source study is preserved when undertaking the network meta-analysis.

 

How to cite: Network Meta-Analysis (NMA) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/network-meta-analysis-nma/

 


A non-inferiority study is one where the aim is to show that the effectiveness of one technology is not inferior to a comparator technology by more than a clinically important amount. This non-inferiority margin (M2: also called the ‘preserved fraction’ or ‘degree of inferiority’) is the percentage of the effect size of the current or comparator technology against a placebo comparator (M1) that must be maintained for non-inferiority to be concluded. M1 and M2 need to be established prior to a study, to allow a sample size to be calculated with sufficient power to support a conclusion that the intervention technology is non-inferior or equivalent to the comparator. Particular attention needs to be paid to the derivation of M1 and M2 and the quality of the study design and execution. Unlike for superiority studies, intention-to-treat analyses are unlikely to be conservative (per-protocol analyses may need to be performed) and statistical hypothesis testing may need to be one-sided at the 2.5% level. Non-inferiority and clinical equivalence studies have become more common in recent years for interventions where placebo comparators cannot be used for ethical reasons, or where there is interest by health care payers in establishing the comparative effectiveness of a new technology.

How to cite: Non-Inferiority (Study) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/non-inferiority-study/

 


Non-parametric statistical procedures rely on no or few assumptions about the shape of the distribution of the measured characteristic of interest in the underlying population. Commonly used non-parametric tests are the Wilcoxon, Mann-Whitney and Kruskal-Wallis tests. Reasons for using such tests may be that the sample size is very small, the data are ordinal/ranked in nature, or that there is a skewed distribution (e.g. survival, income) with extreme outliers (long ‘tail’), where a summary statistic such as the median may be of more value than the mean. Non-parametric tests generally have less power (i.e. will require larger sample sizes) than the corresponding parametric tests if the data are truly normal, and interpretation of the results of such procedures can also be more difficult. Generally, non-parametric methods are of limited use for economic evaluation, where the focus is more on estimation (to support decision-making) than on hypothesis testing. Nevertheless, bootstrapping is a useful non-parametric technique, and direct use of (Kaplan-Meier) survival data from source studies to estimate survival of a modelled cohort may also be considered to be non-parametric, in contrast to the use of parametric functions.

How to cite: Non-Parametric (Tests) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/non-parametric-tests/

 


Number needed to treat (NNT) is the number of patients who need to receive an intervention of interest (compared with an alternative) in order for one unit of health outcome to be gained – or one unit to be prevented in the case of an adverse outcome. For example an NNT of 50 to prevent a myocardial infarction (MI) means that 50 patients need to be treated for 1 MI to be prevented – or, for every 1,000 patients treated, 20 MIs are expected to be prevented.  NNT is numerically equivalent to the reciprocal of the absolute risk reduction (ARR). NNTs have been popularised as a central concept in evidence-based medicine (clinical epidemiology), as a simple way of communicating the results of clinical trials. They require a defined endpoint (and so may be less suitable for chronic, progressive conditions) and are time-specific: in the example above, the duration of treatment and the time period over which MIs may be prevented need to be specified. The degree of patient benefit associated with different endpoints may vary. As similar NNTs may be generated by different combinations of underlying risk and risk reduction, they should not be used directly in meta-analysis.
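
A worked version of the example in the definition:

```python
# Hypothetical 5-year MI risks: 6% with the comparator, 4% with treatment.
risk_control = 0.06
risk_treatment = 0.04

arr = risk_control - risk_treatment   # absolute risk reduction = 0.02
nnt = 1 / arr                         # reciprocal of the ARR

print(round(nnt))          # 50 patients treated to prevent 1 MI
print(round(1_000 * arr))  # 20 MIs prevented per 1,000 patients treated
```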

 

How to cite: Number Needed to Treat [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/number-needed-to-treat/

 


O

An odds ratio (OR) is a measure of the proportional excess risk of an event in a population compared with the risk in another population. When the populations are defined by treatment choice (but otherwise identical, as in a clinical trial) this gives a measure of the relative effect of an intervention. The OR is the odds of an event occurring in the intervention group divided by the odds of the same event occurring in the comparison (control) group. (Odds are the number of subjects in a population experiencing an event divided by the number of subjects in the population not experiencing the event.) An odds ratio greater than one indicates that the event is more likely to occur in the intervention group than in the control group. An OR equal to 1 indicates that there is no difference between the groups (i.e. the event is equally likely to occur in the intervention group and control group). Compared with relative risks, odds ratios have statistical properties that make them especially useful for meta-analyses. However they may need to be translated to relative risks if differences in absolute effect are required, for example in cost-effectiveness analyses. In the case of rarer events (e.g. occurring in <20% of the population) ORs and corresponding relative risks tend to be similar in magnitude.
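
A small sketch computing the OR and the corresponding RR from a hypothetical 2x2 table; at event rates around 20% the two measures already begin to diverge:

```python
# Hypothetical 2x2 table of events by treatment group.
events_tx, no_events_tx = 30, 170     # intervention group (n = 200)
events_ctl, no_events_ctl = 50, 150   # control group (n = 200)

odds_tx = events_tx / no_events_tx    # ~0.18
odds_ctl = events_ctl / no_events_ctl # ~0.33
odds_ratio = odds_tx / odds_ctl       # ~0.53

risk_tx = events_tx / (events_tx + no_events_tx)      # 0.15
risk_ctl = events_ctl / (events_ctl + no_events_ctl)  # 0.25
relative_risk = risk_tx / risk_ctl                    # 0.60

print(round(odds_ratio, 2), round(relative_risk, 2))
```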

 

How to cite: Odds Ratio [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/odds-ratio/

 


The opportunity cost of an intervention is what is foregone as a consequence of adopting a new intervention. In a fixed budget health care system where increased costs will displace other health care services already provided, the opportunity cost is measured as the health lost as a result of the displacement of activities to fund the selected intervention. In terms of choosing to fund intervention A over intervention B, the opportunity cost of choosing A is the benefit foregone by not funding B; this is assessed by comparing the difference in benefits (incremental benefit) and the difference in cost (incremental cost) of A compared with B.  Often, when a new costly intervention is adopted within a health system, the opportunity cost (i.e. the health benefits displaced) will be unknown and unrelated to the intervention being adopted.

 

How to cite: Opportunity Cost [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/opportunity-cost/

 


Outcomes research is a term used to cover a broad range of areas of research, the primary focus being on assessing the effectiveness of health interventions and services. Examples in this domain cover healthcare delivery, cost-effectiveness, health, disease burden and so on. Patient-centred (sometimes known as patient-focused) outcomes research is an umbrella term used to summarise research into patients’ perspectives on their experiences in health care systems, for instance through development and use of patient-reported outcome (PRO) measures designed to capture the impact of treatment on symptoms and quality of life, or through measuring patient preferences to help (re)design hospital services.

 

How to cite: Outcomes Research [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/outcomes-research/

 


P

A p value is the probability of observing results at least as extreme as those reported in a study, assuming that the null hypothesis is true (usually ‘no difference’). In a hypothesis test this value is compared with the prespecified threshold for significance (α) in order to conclude whether there is evidence to reject the null hypothesis. Calculators are available to provide p values associated with study results expressed as Z scores (normal distribution), t scores, chi-squared and many other distributions. A common misapprehension is that the p value supports reasoning about the probabilities of alternative hypotheses, whereas its use is primarily to decide whether or not to reject the null hypothesis.
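
A minimal sketch of such a calculation for a result expressed as a Z score (hypothetical value), using scipy:

```python
from scipy.stats import norm

z = 1.96  # hypothetical Z score from a study result

# Two-sided p value: the probability of a result at least this extreme
# in either direction, assuming the null hypothesis is true.
p = 2 * norm.sf(abs(z))
print(round(p, 4))  # ~0.05
```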

How to cite: P Value [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/p-value/

 


Parametric refers to a broad classification of statistical procedures, including tests, which rely on assumptions about the ‘shape’ of the distribution of a measured characteristic of the underlying population, as well as the parameters used to describe that assumed distribution. A frequent assumption is that of an approximately normal distribution, described by its mean and standard deviation. Commonly used examples of parametric statistical procedures are t tests, analysis of variance (ANOVA) and all forms of regression. It is important to validate the assumptions associated with a parametric procedure, as incorrect conclusions can be made if the data deviate from these assumptions: in particular a parametric assumption of normality may be questionable for small sample sizes. In economic modelling parametric functions (such as Weibull, Gamma or exponential) are frequently used to represent overall survival or time to other important events such as disease progression or treatment discontinuation. These functions are used to project the experience of modelled cohorts beyond the duration of measured experience and can help in sensitivity analyses to assess the impact of parameter uncertainty on the model outputs.  Where data cannot be assumed to follow a specifically defined ‘shape’, then non-parametric tests should be used instead.

How to cite: Parametric (Tests) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/parametric-tests/

 


A partitioned survival model is a type of economic model used to follow a theoretical cohort through time as they move between a set of exhaustive and mutually exclusive health states. Unlike a Markov model, the number of people in any state at successive points in time is not dictated by transition probabilities.  Instead, the model estimates the proportion of a cohort in each state based upon parametric survival equations. These types of model are frequently used to model cancer treatments, with separate survival equations for overall survival and progression-free survival. Common functions used to describe survival are the exponential, Weibull or Gompertz (amongst others). Sensitivity analysis can be undertaken by varying the parameters defining the survival equations; however, if the survival equations are independent, care needs to be taken that logical impossibilities do not arise (e.g. modelled progression-free survival exceeding overall survival).
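
A minimal sketch of the partitioning step, assuming hypothetical exponential survival equations:

```python
import numpy as np

# Hypothetical survival equations (probability of being event-free at
# time t): OS for death from any cause, PFS for progression or death.
t = np.arange(0, 121)            # monthly model cycles over 10 years
os_curve = np.exp(-0.02 * t)     # overall survival
pfs_curve = np.exp(-0.05 * t)    # progression-free survival

# Partition the cohort into exhaustive, mutually exclusive states.
progression_free = pfs_curve
progressed = os_curve - pfs_curve   # alive but post-progression
dead = 1 - os_curve

# With independently fitted equations, check the logical constraint
# that PFS never exceeds OS (here guaranteed by the chosen rates).
assert np.all(progressed >= 0)
```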

 

How to cite: Partitioned Survival Model [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/partitioned-survival-model/

 


Patient access schemes (PASs) are confidential pricing agreements proposed by pharmaceutical companies to enable patients to gain access to drugs or other treatments that may not be considered to be cost-effective under normal circumstances.  In some forms, they are known as ‘risk sharing’ or ‘rebate’ schemes.  In effect, they are usually forms of simple price discounting, which enable companies to retain control of (undiscounted) list prices across countries but also, in principle, facilitate renegotiations of the discount should the need arise.  Principles and terms for these schemes were set out in the 2014 Pharmaceutical Price Regulation Scheme (PPRS).  More complex schemes include outcomes-based dose caps, rebates and upfront free stock.  Proposed schemes in England are presented to NICE (PAS Liaison Unit) which provides initial guidance, assesses them for feasibility, and then advises NHS England.  Similar arrangements for Scotland are co-ordinated by the Scottish Medicines Consortium.  If a medicine subject to a PAS becomes a comparator in a technology appraisal for a new intervention, it is the discounted (post-PAS) price that will need to be considered in that appraisal.

 

How to cite: Patient Access Scheme [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/patient-access-scheme/

 


A patient-level simulation is a type of model in which outcomes are estimated for modelled patients one at a time. In this type of model, the determination of outcomes is usually based on random (stochastic) selection of patient characteristics: a large number of patients need to be simulated in order to estimate the mean outcomes (and their distribution) for the population considered in the analysis. Benefits of this type of model over cohort models are that it allows individual patient histories to be recorded, and that it can capture (first-order) heterogeneity in the patient population. Patient-level simulations are often considered more intuitive or more flexible than cohort models. One drawback may be the additional computational requirements for the model to run, particularly when running sensitivity analyses.
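
A minimal sketch of the one-patient-at-a-time logic, with entirely hypothetical patient characteristics and outcome rules:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n_patients = 10_000   # many simulated patients are needed for stable means

qalys = np.empty(n_patients)
for i in range(n_patients):
    # First-order heterogeneity: each simulated patient receives
    # individual characteristics drawn at random.
    age = rng.normal(65, 8)
    high_risk = rng.random() < 0.3

    # Outcome depends on the individual history (illustrative rule only).
    life_years = max(0.0, rng.normal(12 - 0.1 * (age - 65), 3))
    utility = 0.70 if high_risk else 0.80
    qalys[i] = life_years * utility

print(qalys.mean(), qalys.std())
```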

 

How to cite: Patient-Level Simulation Model [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/patient-level-simulation-model/

 


Patient-reported experience measures (PREMs) are psychometrically validated tools (e.g. questionnaires) used to capture patients’ interactions with healthcare systems and the degree to which their needs are being met. PREMs are designed to determine whether patients have experienced certain care processes rather than their satisfaction with the care received (which may be subject to bias). A PREM may, for instance, be used to collect information on the patient experience of hospital admission. Data derived from this could be used to inform service development and configuration.

 

How to cite: Patient-Reported Experience Measure (PREM) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/patient-reported-experience-measure-prem/

 


Patient-reported outcome measures (PROMs) or patient-reported outcome instruments (the term more commonly used in the US) are psychometrically validated tools, such as questionnaires, used to collect PROs. In clinical trials, for example, PROMs may be used to collect PRO data to enable a pharmaceutical company (“sponsor”) to support a claim in the product labelling (USA) or summary of product characteristics (SmPC, Europe).

 

How to cite: Patient-Reported Outcome Measures (PROMs) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/patient-reported-outcome-measures-proms/

 


Patient-reported outcomes (PROs) are any reports coming directly from patients about their health, condition, etc., made solely by the patients themselves without input, suggestion or interpretation from their doctors, family, friends or other individuals. PRO is a blanket term relating to single or multidimensional aspects of patients’ symptoms, health, quality of life, treatment satisfaction, medication adherence, etc. PROs are often recorded in clinical trials, using validated instruments to measure the impact of the intervention as perceived by the patient.

 

How to cite: Patient-Reported Outcomes (PROs) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/patient-reported-outcomes-pros/

 


The perspective is the point of view adopted when deciding which types of costs and health benefits are to be included in an economic evaluation. Typical viewpoints are those of the patient, hospital/clinic, healthcare system or society. The broadest perspective is the ‘societal’ perspective, which reflects the full range of social opportunity costs associated with different interventions. In particular, this includes productivity losses arising from patients’ inability to work, and changes in these losses associated with a new therapy. In its reference case UK NICE recommends a perspective of ‘NHS and personal and social services’, recognising that the societal perspective may bias against those not in work, such as people over retirement age or those unable to work for health reasons. The NHS perspective includes treatment costs such as medicine costs, administration and monitoring, other health service resource use associated with managing the disease (e.g. GP visits, hospital admissions), and the costs of managing adverse events caused by treatment. It does not include patients’ costs of obtaining care such as transportation, over-the-counter purchases, co-payments or time off work. For NICE’s perspective on health outcomes, QALYs are based on the general population’s valuation of health outcomes (obtained through surveys), and not patients’ own valuations of their health states.

 

How to cite: Perspective [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/perspective/

 


When individuals at risk of a condition are administered a diagnostic or screening test for a condition of interest, the positive predictive value (PPV) is the proportion of those who test positive who indeed have the condition (true positives). This may also be called diagnostic precision. This statistic is influenced both by the sensitivity and specificity of the test itself and the prevalence of the condition in those tested. If the pre-test probability is the same as the prevalence, then the PPV is numerically the same as the post-test probability. PPV is important in economic modelling of diagnostic tests as it indicates the proportion of those who receive further tests or an intervention who can potentially benefit. Those who test positive without the disease (false positives) may experience side effects of further tests or interventions without benefit. In information retrieval PPV is sometimes called the precision of the search strategy. PPV is closely related to negative predictive value (NPV).
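
A small worked sketch (hypothetical test characteristics) showing how strongly PPV depends on prevalence:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value from test characteristics and prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same hypothetical test has a far lower PPV when screening a
# low-prevalence population than when used in a high-prevalence clinic.
print(ppv(0.90, 0.95, prevalence=0.01))  # ~0.15
print(ppv(0.90, 0.95, prevalence=0.30))  # ~0.89
```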

 

How to cite: Positive Predictive Value [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/positive-predictive-value/

 


Statistical power (1 – β) relates to the ability of a study to detect an effect (or an association between two variables) when there is indeed an effect there to be detected. This is the same as the probability of rejecting the study’s null hypothesis when it is false (see hypothesis testing). A high power (i.e. a low value for β) means that there is a low risk of making a Type II error, and a low power (i.e. a high value for β) means that it is more likely that a meaningful clinical difference will remain in question after the study, as the study fails to reject the possibility of no difference. Power is important in the design of comparative studies, because it is used to determine the minimum sample size required, derived from the desired power, the minimum effect size and the desired significance level, and whether it is reasonable and ethical to proceed. Conventionally a power of 80% (β = 0.2) is used: this is based more on historical precedent and pragmatic considerations than on statistical theory. Lower values for power may be acceptable for studies other than trials, with lower risks to study participants. If the statistical power of a study is low, the results of the study may be questioned because the study may be considered to have been too small to detect any differences.
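
A minimal sketch of the sample-size side of this calculation, using the standard normal-approximation formula for comparing two means (all inputs hypothetical):

```python
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample,
    two-sided comparison of means."""
    z_alpha = norm.ppf(1 - alpha / 2)   # significance level
    z_beta = norm.ppf(power)            # desired power (1 - beta)
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

# Hypothetical trial: detect a 5-point difference, SD 15, 80% power.
print(n_per_group(delta=5, sd=15))  # ~141 patients per group
```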

How to cite: Power [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/power/

 


A pragmatic review is one that adapts the conventional systematic review process to take into consideration limited time and/or resources available. This is usually achieved by applying additional limits to the search or eligibility criteria.

 

How to cite: Pragmatic Review [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/pragmatic-review/

 


Many clinical trials of new healthcare interventions, especially those required to meet the requirements of regulatory authorities, are ‘explanatory’ in nature. This means that they are executed to strict protocols, including tight inclusion and exclusion criteria, tightly controlled delivery of the intervention and comparator as well as any concomitant therapy, measures in place to ensure strict adherence to the intervention, intensive follow-up, and a pre-specified analysis plan. The underlying purpose is that the trial should have the best chance of identifying differences in efficacy or safety. However there is increasing concern that effectiveness in ‘real world’ situations (routine practice) differs from that predicted from the efficacy reported in such trials. This has led to increased interest in more ‘pragmatic’ trials, where many of the strict protocol elements are relaxed, and results may be more immediately generalizable to routine practice.

 

How to cite: Pragmatic Trials [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/pragmatic-trials/

 


Precision can refer to a number of different concepts and so can be a confusing term. Diagnostic precision may refer to a test‘s positive predictive value or its reproducibility when repeated on the same sample (i.e. variation in results due to random error, often expressed as a coefficient of variation). More generally in statistics precision refers to the tightness of distribution (degree of spread around the average value) of a quantity, for example one measured in a population survey. Formally precision is defined as the reciprocal of the variance of this distribution.

 

How to cite: Precision [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/precision/

 


Preference-based measures (PBM) or generic preference-based measures are increasingly used in health economic evaluations to calculate quality-adjusted life years (QALYs). Such measures usually comprise a number of domains (or a descriptive set) that patients can use to describe various aspects of their health (e.g. limitations in daily activities and mobility, pain and discomfort). These patient-reported values (profile scores) are then converted to an index score using a selected algorithm (sometimes country-specific). These algorithms are based on surveys of the general public’s preferences for different combinations of health states, which is why these measures are referred to as “preference-based”. The index scores (sometimes called ‘utilities’) usually range between 0 and 1, where 1 is usually taken to reflect a valuation of “perfect health” and 0 a valuation of “death”. In some of these measures values below zero may be possible, representing health states perceived to be worse than death. Examples of PBMs include the EQ-5D, SF-6D and the Health Utilities Index. The EQ-5D is NICE’s preferred instrument for cost-utility evaluations in healthcare technology assessments.

 

How to cite: Preference-Based Measures [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/preference-based-measures/

 


Preferred Reporting Items are the reporting items deemed by The PRISMA Group to be essential for maintaining a minimum level of reporting quality in systematic reviews and meta-analyses. They are detailed on the 27-item PRISMA Checklist and are split into the sections: Title, Abstract, Introduction, Methods, Results, Discussion and Funding. Each section contains guidance on the minimum information points (“items”) that should be reported, to ensure a consistent minimum standard of reporting.

 

How to cite: Preferred Reporting Items [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/preferred-reporting-items/

 


Point prevalence is the proportion of individuals in a population who have a condition of interest at a specific point in time. Period prevalence is the proportion who experience the condition over a specified time period. This differs from incidence as it includes those who already have the condition at the start of the time period. In the steady state (no epidemics), and when the prevalence in the population is quite low, prevalence (P) and incidence (I) are related: P = I × D, where D is the average duration of the disease. Cross-sectional surveys such as population censuses give detailed information on the prevalence of conditions or risk factors, but are less likely to be useful in describing the natural history or progression of a condition. In economic evaluation, budget impact analyses tend to rely on prevalence information, whereas cost-effectiveness analyses are generally applicable to incident cohorts (with newly developing or progressing disease).
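
A worked example of the steady-state relation, with hypothetical figures:

```python
# Steady-state relation for a relatively rare condition: P = I x D.
incidence = 0.002     # 2 new cases per 1,000 person-years (hypothetical)
mean_duration = 5.0   # average disease duration in years (hypothetical)

prevalence = incidence * mean_duration
print(prevalence)     # 0.01, i.e. 1% of the population at a point in time
```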

 

How to cite: Prevalence [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/prevalence/

 


Probabilistic sensitivity analysis (PSA) is a technique used in economic modelling that allows the modeller to quantify the level of confidence in the output of the analysis, in relation to uncertainty in the model inputs. There is usually uncertainty associated with input parameter values of an economic model, which may have been derived from clinical trials, observational studies or in some cases expert opinion. In the base case analysis, the point estimate of each input parameter value is used. In the probabilistic analysis, these parameters are represented as distributions around the point estimate, which can be summarised using a few parameters (such as mean and standard deviation for a normal distribution). Different distributions are generally appropriate for different types of variable, where possible backed up by supporting evidence from source studies. For example measures of effect such as hazard ratios or relative risk reductions may be represented by a normal distribution, and survival curves by a Weibull distribution. In a PSA, a set of input parameter values is drawn by random sampling from each distribution, and the model is ‘run’ to generate outputs (cost and health outcome), which are stored. This is repeated many times (typically 1,000 to 10,000), resulting in a distribution of outputs that can be graphed on the cost-effectiveness plane, and analysed. A key output of a PSA is the proportion of results that fall favourably (i.e. considered cost-effective) in relation to a given cost-effectiveness threshold. This may be represented using a cost-effectiveness acceptability curve.
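
A minimal sketch of the loop described above, assuming a deliberately simple model and hypothetical input distributions:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_runs = 5_000
threshold = 20_000   # willingness-to-pay per QALY (hypothetical)

# Draw each uncertain input from its assigned distribution (hypothetical).
rel_risk = rng.normal(0.80, 0.05, n_runs)   # treatment effect
inc_cost = rng.gamma(16, 250, n_runs)       # skewed incremental cost, mean ~4,000
baseline_qaly_loss = 0.5                    # fixed structural value

# 'Run' the (deliberately simple) model once per draw and store outputs.
inc_qalys = baseline_qaly_loss * (1 - rel_risk)
nmb = inc_qalys * threshold - inc_cost

# Proportion of runs that are cost-effective at this threshold: one
# point on a cost-effectiveness acceptability curve.
print((nmb > 0).mean())
```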

 

How to cite: Probabilistic/Stochastic Sensitivity Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/probabilistic-stochastic-sensitivity-analysis/

 


Psychometric properties refer to the validity and reliability of the measurement tool. Before being able to state that a questionnaire has excellent psychometric properties, meaning a scale is both reliable and valid, it must be evaluated extensively. Reliability is the degree to which a measure is free from measurement error and includes internal consistency reliability and test-retest reliability. Validity is the degree to which an instrument measures the outcome concepts it purports to measure and includes content validity, construct validity, criterion validity and responsiveness.

How to cite: Psychometric Properties [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/psychometric-properties/

 


Q

The term ‘qualitative review’ may be used either to refer to a review where the source studies reviewed report only qualitative data or to refer to a review the results of which are reported in a qualitative fashion. In the latter case the source studies may have reported quantitatively. Care should be taken to define exactly what is meant by a qualitative review, and to justify why a quantitative review was not appropriate/possible.

 

How to cite: Qualitative Review [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/qualitative-review/

 


Quality of life is a broad, multidimensional concept of an individual’s subjective evaluation of aspects of his/her life as diverse as physical, social, spiritual and emotional well-being, as well as possibly touching on other areas such as his/her environment, employment, education and leisure time. Within this wide-ranging definition, health-related quality of life (HRQOL) is used to refer to the impact a medical condition and/or treatment has on a patient’s functioning and well-being. HRQOL is increasingly being measured in clinical trials alongside other outcome measures to evaluate the full range of effects of an intervention (e.g. a new medicine) from the patients’ perspective. For instance, in oncology trials symptom burden may be measured in addition to survival and progression-free survival.

 

How to cite: Quality of Life [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/quality-of-life/

 


The quality-adjusted life year is a summary outcome measure used to quantify the effectiveness of a particular intervention. Since the benefits of different interventions are multi-dimensional, QALYs have been designed to combine the impact of gains in quality of life and in quantity of life (i.e. life expectancy) associated with an intervention. In this case it is the incremental (i.e. differences between two or more alternatives) QALYs, compared with the incremental costs, that provides the measure of economic value. If a wide range of aspects (domains) of quality of life is included in the quality component, the resulting QALYs should be comparable across disease areas, which is valuable when considering broad-based resource allocation decision-making. More specifically, QALYs are based on utilities, which are valuations of health-related quality of life measured on a scale where full health is valued as 1 and death as 0. These valuations are then multiplied by the duration of time (in years) that a subject spends in a health state with that particular utility score, and aggregate QALYs are then summed over the subject’s projected lifetime (or other time period corresponding to the time horizon of the analysis). For example, if someone experiences a health state with a utility of 0.8 for 10 years and then a health state with a utility of 0.5 for 5 years (and then dies), his/her aggregate QALYs will be (0.8×10) + (0.5×5) = 10.5 QALYs. QALYs are recommended by NICE as its preferred measure of health outcome for use in technology appraisals.
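
The worked example from the definition, expressed as a short calculation:

```python
# (utility, years in state): 10 years at 0.8, then 5 years at 0.5.
states = [(0.8, 10), (0.5, 5)]

qalys = sum(utility * years for utility, years in states)
print(qalys)  # 10.5
```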

 

How to cite: Quality-Adjusted Life Year (QALY) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/quality-adjusted-life-year-qaly/

 


R

A randomised controlled trial (RCT) is an experiment designed by investigators to study the efficacy and safety of at least two interventions in groups of randomly assigned subjects. The main value of randomisation is that the study groups are designed to be as near identical as possible in measured as well as unmeasured characteristics that may impact on the study outcomes. Where possible (more likely when medicines are compared) patients, investigators and assessors are ‘blinded’ as to which subjects are receiving which intervention. Patients (study subjects) are recruited using pre-specified inclusion and exclusion criteria, and following a review of the study protocol and agreement for the study to proceed by an independent ethical committee. The study protocol describes all aspects of the study, including the trial duration and schedule of visits: usually multiple follow-ups to measure the outcomes of interest. When the follow-up schedule is completed for the final study subject, the ‘blind’ is broken, and analysis comparing the study groups is undertaken, using statistical methods pre-specified in an analysis plan, with a pre-specified criterion for success. Success usually consists of rejection of a ‘null’ hypothesis – i.e. no difference between treatment groups in the outcome measure. The null hypothesis together with the expected population variation in the outcome measure is used to power the trial and calculate the required sample size. Study groups should be analysed according to ‘intention to treat’, in which subjects are allocated to the group to which they were randomised (whatever happens next). ‘On treatment’ analyses, which take into account any withdrawals or cross-overs between treatment groups during the trial, may also be useful. During the trial an independent panel reviews the blinded data to identify unexpectedly large (or small) differences in efficacy or safety between groups that may require the trial to be terminated early on ethical grounds. RCTs are considered very important evidence in the development of any medical intervention, and their results are frequently identified in systematic reviews and meta-analyses and are used in health economic modelling. An important distinction is made between ‘explanatory’ trials, which use very tight protocols and intense follow-up to measure safety and efficacy with the minimal possible opportunity for bias, and pragmatic trials in which the aim is to mimic usual practice with broader inclusion criteria and fewer protocol-related activities (and so generate results which may be considered more generalizable to routine practice).

 

How to cite: Randomised Controlled Trial [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/randomised-controlled-trial/

 


Rapid reviews can provide quick summaries of what is already known about a topic or intervention. Rapid reviews use systematic review methods to search and evaluate the literature, but the extensiveness of the search and other review stages may be limited.

 

How to cite: Rapid Review [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/rapid-review/

 


A rating scale is a means of quantifying responses to items or questions in a test, survey or questionnaire using a set of categories. These categories may take a number of formats, such as an ordered series of numbers, e.g. 1, 2, 3, 4; a series of descriptions: “strongly disagree”, “disagree”, “agree”, “strongly agree” or a combination of the two. It is also possible to have a rating scale consisting of a series of numbers with two qualitative descriptors (known as “anchors”), one at each extreme. Where qualitative descriptors only are used these will be assigned a numerical value for scoring purposes. There are no limits to how many categories can be included in a rating scale, but for practical purposes scales with 4, 5, 7 and 10 categories are commonly used. Responses to rating scales for a number of test items may be summed to produce sub-total (“domain”) or overall scores. In the latter case this is known as a Likert scale. The majority of rating scales are ordinal: although the categories increase monotonically, the distances between each category cannot be assumed to be equal. For example, if “disagree” is assigned the value 2, “agree” a 3 and “strongly agree” a 4, the difference between “strongly agree” and “agree” cannot be assumed to be the same as the difference between “agree” and “disagree”.

 

How to cite: Rating Scale [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/rating-scale/

 


A ‘reference case’ is used by some health technology assessment bodies to summarise their guidance to those making and assessing submissions for reimbursement. The reference case gives a formal statement of accepted methods and assumptions underpinning analyses to which submissions should conform. The purpose is to ensure a level of consistency between submissions and assessments of evidence. The UK NICE reference case (for technology appraisals) includes statements on defining the decision problem, choice of comparator, perspective, time horizon and discounting, sources of data, preferred type of economic evaluation. Non-reference case methods and analyses are usually permitted, but a justification for the deviation from the reference case is required.

 

How to cite: Reference Case [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/reference-case/

 


Relative risk (RR), or risk ratio, is an estimate of the magnitude of an association between an exposure and a disease, giving the likelihood of developing the disease in the exposed group compared with the unexposed group. This is calculated as the ratio of the cumulative incidence of the disease in each group. In clinical trials this corresponds to the probability of developing the outcome of interest in the treatment group compared with (divided by) the equivalent probability in the comparison group.  A relative risk of 1 means that there is no difference in risk between the groups compared; <1 (>1) means that the risk is lower (higher) in the treatment group. For relatively rare events relative risks are similar in magnitude to odds ratios, which have statistical properties that favour their use in meta-analyses. However, risk models (which may form the basis of more complex economic models) are often built on sets of relative risks associated with different risk factors for outcomes of interest.
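
A worked example with hypothetical cumulative incidences:

```python
# Hypothetical cumulative incidence of the outcome over the study period.
risk_exposed = 0.12     # e.g. the treatment (exposed) group
risk_unexposed = 0.20   # the comparison group

relative_risk = risk_exposed / risk_unexposed
print(round(relative_risk, 2))  # 0.6: risk is lower in the exposed group
```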

 

How to cite: Relative Risk [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/relative-risk/

 


The relative reduction in risk (RRR) for an intervention, usually expressed as a percentage reduction from the risk of the comparison intervention, is commonly used as the primary result of clinical trials. The anticipated reduction in the primary outcome measure for the trial will be used to calculate the study sample size, in order to have sufficient power for an observed RRR to reject the null hypothesis of no difference (RRR = 0) at the required level of significance. The RRR is calculated as the difference in risks between the two interventions, divided by the risk of the comparison therapy (× 100 to convert it into a percentage). In modelled economic evaluations RRRs are frequently used as the measure of effect (and the confidence intervals for the RRR provide a distribution for this measure, usually assumed to be normal); however, the RRRs need to be combined with absolute levels of risk in order for incremental values (health outcome or cost) to be calculated.

 

How to cite: Relative Risk Reduction [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/relative-risk-reduction/

 


Measures such as patient reported outcome measures (PROM), used within clinical trials or service evaluation, should be reliable, valid and sensitive to detect change. These are key psychometric requirements of such tools. There are two types of reliability. The first, internal consistency, refers to how people respond to individual items designed to measure the same underlying construct. Patient reported outcome measures often consist of multiple items which are designed to measure the same underlying construct. As such, people’s responses to these items should be consistent. When this is the case, the measure can be said to have internal consistency. This is tested through Cronbach’s alpha. Cronbach’s alpha is a coefficient of reliability or consistency and is a function of the number of test items and the average inter-correlation among the items. The second form of reliability is test-retest reliability. If a measure is administered twice, a short space of time apart, it would be expected that a person’s score remains the same. The strength of correlation between the two scores is how we assess test-retest reliability. Measures need to be reliable, producing consistent scores.
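
A minimal sketch of the internal-consistency calculation, implementing the standard Cronbach’s alpha formula on hypothetical item scores:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 people answering a 3-item measure.
scores = [[4, 5, 4], [2, 3, 2], [3, 3, 4], [5, 5, 5], [1, 2, 1]]
print(cronbach_alpha(scores))
```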

 

How to cite: Reliability [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/reliability/

 


Reliability generalisation is a type of meta-analysis. However, rather than deriving an estimate of an overall effect size for an intervention it provides an estimate of the internal consistency of a questionnaire, test or survey. Along with validity and responsiveness to change, internal consistency (or internal reliability) is one of the important psychometric properties of a test, and may be thought of as how well the individual items or questions included in the instrument correlate with each other. This degree of correlation can, for instance, be expressed as a Cronbach’s alpha statistic. Reliability generalisation may be used to produce an estimate of internal consistency of a test from Cronbach’s alpha statistics pooled from different studies.

 

How to cite: Reliability Generalisation [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/reliability-generalisation/

 


Reporting bias is the tendency for authors to selectively use information or outcomes from investigations or trials, based on certain characteristics. The dissemination of research findings may therefore be influenced by the nature and direction of the results. Different types of bias due to the nature and direction of results include: ‘publication bias’ – likelihood of publication of research; ‘time lag bias’ – delaying or expediting publication of research; ‘citation bias’ – citing or not citing research; and ‘outcome reporting bias’ – selective reporting of outcomes. Recently there have been initiatives such as clinicaltrials.gov to minimise this type of bias, by ensuring that clinical trials (especially those of new medicines, for which marketing authorisation will be sought) are entered prospectively on registers so that their status and results are available, especially for systematic reviews of efficacy supporting health technology assessment.

 

How to cite: Reporting Bias [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/reporting-bias/

 


A synthesis of research data is the summary of the outcomes of interest for a particular research question, taken from two or more previously reported studies, in order to obtain a comprehensive (and unbiased) view of the available evidence, and frequently to make a judgement as to the quality and relevance to a current research question or decision problem. It enables researchers to create generalisations based on integrated empirical research. Depending on the type of data and amount of resources available, a research synthesis may be undertaken in a number of ways, e.g. a narrative summary or a formal meta-analysis.

 

How to cite: Research Synthesis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/research-synthesis/

 


Resource use refers to the use of healthcare staff time, facilities, or consumables (especially medicines). In clinical trials and other studies health care costs are frequently estimated by counting items of resource that are used by study subjects, and associating these with relevant costs for each resource unit for the country of interest. In fee-for-service systems such as the US, charges may be recorded, which can be converted to costs using cost-to-charge ratios. Commonly recorded items of resource use are GP or specialist visits, hospital admissions (often classified by diagnosis-related group) or hospital bed days (often classified by intensity of ward care), and medications administered (by dosage, frequency and route of administration). These may be recorded as one-off events or aggregated over a time period (month or year) for use in economic modelling. Resource use and unit cost data are available from many sources, and judgements may be needed as to which data are most suitable for use in the particular context. This is usually based on assessment of data quality and the similarity of the source (study) to the situation to which the resource use and unit costs will be applied. Resource use data collected in clinical trials may be of limited value for economic evaluations because of the inclusion of protocol-related elements (study visits, care in specialist centres), which may not be representative of routine care. Also, trials are frequently performed in a variety of countries where care patterns and resource definitions may differ.

 

How to cite: Resource Use [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/resource-use/

 


Responsiveness (also known as ‘sensitivity to change’) concerns an instrument’s ability to detect clinically important changes resulting from an intervention. For an instrument to be responsive it must have the capacity to detect small but clinically significant changes over time. A responsive instrument will demonstrate scores that increase if the patient improves, decrease if the patient worsens and remain unchanged if the patient’s state is stable; the opposite may be true depending on the scoring system used (for instance, a higher score on the PHQ-9 reflects more severe disease). Responsiveness is particularly relevant for disease-specific measures. The most common methods of estimating an instrument’s responsiveness are: (1) the effect size, d = (M2 – M1) / SD1, where M1 is the mean at time 1, M2 the mean at time 2 and SD1 the standard deviation at time 1. The effect size relates change over time to the standard deviation of baseline scores (and, as a result, is largely dependent on the variability of scores at baseline); d is interpreted in terms of the magnitude of change (small = 0.2; moderate = 0.5; large = 0.8). (2) The standardised response mean, SRM = (M2 – M1) / SDdiff, where SDdiff is the standard deviation of the score changes.
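
The two estimators above, expressed as short functions with hypothetical scores:

```python
def effect_size(mean_t1, mean_t2, sd_t1):
    """Effect size d = (M2 - M1) / SD1."""
    return (mean_t2 - mean_t1) / sd_t1

def standardised_response_mean(mean_t1, mean_t2, sd_change):
    """SRM = (M2 - M1) / standard deviation of the score changes."""
    return (mean_t2 - mean_t1) / sd_change

# Hypothetical PROM scores before and after treatment.
print(effect_size(40, 48, sd_t1=16))                     # 0.5: moderate
print(standardised_response_mean(40, 48, sd_change=10))  # 0.8
```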

How to cite: Responsiveness [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/responsiveness/

 


Return on investment (ROI) is a performance measure used to evaluate the efficiency of a project or to compare the efficiency of a number of different projects. ROI measures the amount of return on a project relative to its cost. To calculate ROI, the return (net benefit: benefit minus cost) of a project is divided by the cost of the project, with the result expressed as a percentage. The requirement for benefit to be expressed in money terms is the main reason that ROI (or cost-benefit analysis) is not used extensively in health technology assessment; however, it is used in other contexts such as planning large capital investments (new hospitals, units or services).
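The calculation itself is a single division; the sketch below uses hypothetical figures to show the arithmetic.

```python
# ROI = (benefit - cost) / cost, expressed as a percentage.
def roi(benefit: float, cost: float) -> float:
    return (benefit - cost) / cost * 100

# Hypothetical project: £200,000 cost yielding £260,000 of monetised benefit
print(f"ROI = {roi(260_000, 200_000):.0f}%")   # ROI = 30%
```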

 

How to cite: Return on Investment [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/return-on-investment/

 


A review of reviews is a method of collating evidence from multiple systematic reviews, making use of research that has already been conducted. This is particularly useful when there is a large volume of primary studies relevant to the review question.

 

How to cite: Review of Reviews [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/review-of-reviews/

 


Assessing the risk of bias in the primary studies acknowledges the varying quality of included studies, and is used to guide the interpretation of findings and determine the extent to which inferences can be made from the results.  The impact of including low, medium and high quality studies could be assessed in an investigation of heterogeneity. Quality assessment tools are numerous and specific to the type of study design. Some agencies will specify the tool to be used in the systematic reviews they commission/review, and other agencies will be open to discussion about the quality assessment tool to be used.

 

How to cite: Risk of Bias [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/risk-of-bias/

 


S

Scientific advice is the term given to formal consultations with regulatory agencies such as the MHRA, or health technology assessment agencies such as NICE, in advance of making submissions for marketing authorisation or reimbursement. Most frequently the consultations concern the design of confirmatory (Phase 3) studies, the objective being to understand whether the proposed design and analyses are realistic and will deliver data that are useful for decision making. More recently an increasing number of consultations have taken place earlier (about whole programmes of studies for a new medicine) or later (about peri- and post-launch study designs) in product development. There are a variety of options for undertaking scientific advice, from single-organisation (regulatory or HTA) to parallel single-country (e.g. MHRA and NICE) to multi-stakeholder parallel advice (e.g. EMA and many HTA agencies). The status of the advice (whether it is mandatory or formally considered at subsequent submissions) varies by organisation.

 

How to cite: Scientific Advice [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/scientific-advice/

 


A search filter is a ready-made search strategy designed to limit search results to a set of references with specific characteristics. Filters are usually combined with a topic by using ‘AND’, in order to restrict the search to a smaller, more relevant set of results; the other Boolean operators ‘OR’ and ‘NOT’ may also be used. For example, a randomized controlled trials filter should retrieve only those studies that are RCTs. Several versions of a filter may exist, depending on how exhaustive or precise each aims to be. Well-designed filters will retrieve all relevant studies while reducing the amount of literature that needs to be screened by reviewers.

 

How to cite: Search Filter [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/search-filter/

 


A search strategy is a query used to retrieve information, usually from a bibliographic database. It can refer to the query used in one database, or to the general approach that is adapted for use in a number of different sources. The latter is described in the methodology sections of scientific papers; to aid the description, all the proposed sources are listed, and the complete database searches (or a sample) can often be found as an appendix item. Search strategies vary in terms of their complexity, the range of sources used and the publication types they aim to retrieve. Additional retrieval methods should also be documented, including hand-searching specific publications, citation searching, and seeking expert advice.

 

How to cite: Search Strategy [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/search-strategy/

 


The sensitivity of a diagnostic (or screening) test indicates how often the test will give a positive result when the individual being tested indeed has the condition of interest. It is computed as the ratio of true positives (with disease, test positive) to the sum of true positives and false negatives (with disease, test negative), and is usually expressed as a percentage. Together with specificity, sensitivity is a central component of diagnostic accuracy. If there is a choice of cut-off value for a diagnostic test (i.e. to give a ‘positive’ result) then it is likely that sensitivity and specificity will need to be ‘traded off’ to obtain the optimum cut-off value, depending on the seriousness of the consequences for those diagnosed incorrectly with or without the condition. A very high sensitivity value may only be obtainable by reducing specificity, i.e. accepting a larger number of false positives (those without the disease who test positive and require further investigation or treatment without being able to benefit). Sensitivity (and specificity) is also applied to the ability of literature search strategies to identify all relevant (sensitivity) and rule out irrelevant (specificity) research reports.
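A minimal sketch of the calculation, using an invented 2×2 table of test results:

```python
# Diagnostic accuracy from a 2x2 table; the counts below are invented.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)    # proportion of diseased who test positive

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)    # proportion of disease-free who test negative

tp, fn, tn, fp = 90, 10, 160, 40
print(f"Sensitivity = {sensitivity(tp, fn):.0%}")   # 90%
print(f"Specificity = {specificity(tn, fp):.0%}")   # 80%
```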

 

How to cite: Sensitivity (Diagnostic) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/sensitivity-diagnostic/

 


Sensitivity analysis is used to illustrate and assess the level of confidence that may be associated with the conclusion of an economic evaluation. It is performed by varying key assumptions made in the evaluation (individually or severally) and recording the impact on the result (output) of the evaluation. In model-based economic evaluations this includes varying the values of key input parameters, as well as structural assumptions concerning how the parameters are combined in the model. Sensitivity analysis may take a number of forms: ‘one-way’ where input parameters are varied one by one, ‘multi-way’ where more than one parameter is varied at the same time, ‘threshold’ analysis where the model is used to assess the tipping point for an input parameter (at what value of this parameter would the decision based on the output of the evaluation be altered?) and probabilistic (a stochastic approach is taken to produce a distribution of outputs based on distributions of input parameters). Sensitivity analysis is an important part of the evaluation process and gives valuable information to decision-makers about the robustness of their decision based on the findings of an economic evaluation, as well as the potential value of collecting more information before making a decision.
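As an illustration of the threshold form, the sketch below bisects one input of a deliberately simplified stand-in model to find its tipping point; the willingness-to-pay value, incremental cost and linear effectiveness-to-QALY relationship are all invented assumptions, not features of any real evaluation.

```python
# Threshold (tipping-point) analysis on a toy cost-effectiveness model.
WTP = 20_000        # willingness to pay per QALY (assumed)
INC_COST = 8_000    # incremental cost of the new intervention (assumed)

def icer(effectiveness: float) -> float:
    inc_qalys = 0.5 * effectiveness   # toy linear relationship (assumed)
    return INC_COST / inc_qalys

def tipping_point(lo: float, hi: float, tol: float = 1e-6) -> float:
    """Bisection: effectiveness at which the ICER equals WTP
    (the toy ICER falls monotonically as effectiveness rises)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if icer(mid) > WTP:
            lo = mid    # still above threshold: needs more effectiveness
        else:
            hi = mid
    return (lo + hi) / 2

print(f"Decision flips at effectiveness = {tipping_point(0.1, 2.0):.3f}")  # 0.800
```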

 

How to cite: Sensitivity Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/sensitivity-analysis/

 


Significance level is the probability that the results observed in a study (or more extreme results) could have occurred by chance alone. It is closely associated with Type I error (see Type I and Type II errors): incorrectly rejecting the null hypothesis (a false positive result). Based on the distribution of the test statistic used, a p value corresponding to the study results is estimated and compared with the pre-specified significance level, to determine whether or not to reject the study’s null hypothesis. Usually a 2-sided test is required, except in the case of non-inferiority studies. A threshold level of 5% (α = 0.05) for statistical significance is commonly used, which corresponds to a 1 in 20 probability of a positive result arising by chance when there is no true underlying difference in the quantity of interest in the population. This level is based on convention rather than statistical theory. If many separate comparisons are being made in the analysis of a study, it is more likely that a significant result for one of these individual comparisons will occur by chance. In this case a tighter significance threshold (e.g. α = 0.01) may be used, or adjustments such as the Bonferroni correction may be applied to the p value for each comparison.
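A small sketch of the Bonferroni adjustment mentioned above, with invented p values for four comparisons:

```python
# Bonferroni correction: multiply each p value by the number of comparisons
# (capped at 1) before comparing with alpha. The p values are invented.
alpha = 0.05
p_values = [0.003, 0.020, 0.041, 0.300]
m = len(p_values)

for p in p_values:
    p_adj = min(1.0, p * m)
    verdict = "significant" if p_adj < alpha else "not significant"
    print(f"raw p = {p:.3f} -> adjusted p = {p_adj:.3f} ({verdict})")
```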

How to cite: Significance Level [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/significance-level/

 


A single technology appraisal (STA) undertaken by UK NICE is one where a (new) technology is compared with the standard of care for the indication of interest. It is possible that more than one comparator intervention is considered, in which case the new technology is assessed against each relevant comparator in turn (but the comparators are not assessed against each other). It is also possible that the indication is broken down into distinct sub-populations if the new technology may be used at different places in the care pathway and/or where comparator interventions differ for these sub-populations. For new pharmaceuticals, STAs are the most common form of appraisal at the time of marketing authorisation. Components of an STA process include scoping, submission, evidence assessment, (usually two) appraisal meetings, (final) appraisal determination and communication, with provision for an appeal should that be requested by the technology sponsor. Assessment of evidence submissions is undertaken by an independent academic group (the Evidence Review Group, ERG) reviewing a formal submission made by the technology sponsor, and views are sought from independent clinical experts and patients’ representatives. The process from scoping to final determination typically takes 48 weeks. Detailed guidance on the STA process (2018 process guide revision) is available at: https://www.nice.org.uk/process/pmg19/resources/guide-to-the-processes-of-technology-appraisal-pdf-72286663351237

 

How to cite: Single Technology Appraisal (UK NICE) [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/single-technology-appraisal/

 


Social return on investment is a performance measure similar to ‘Return on investment’, but which takes a broader societal perspective to valuing costs and benefits. Social and environmental factors are considered, in addition to economic variables to estimate benefits and costs. The formula used is the same as for return on investment, being benefit minus costs divided by costs, with the results expressed as a percentage. All benefits and costs must be expressed in monetary units.

 

How to cite: Social Return on Investment [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/social-return-on-investment/

 


In the evaluation of diagnostic (or screening) tests, specificity refers to the proportion of the population without the condition who are correctly identified as being without the condition. It is computed as the ratio of true negatives to the sum of true negatives and false positives, and is usually expressed as a percentage. A highly specific diagnostic test is one that incorrectly identifies very few healthy patients as having the condition of interest (and so needing further follow-up or treatment, without benefit). The term can also be applied to literature search strategies (see sensitivity).

 

How to cite: Specificity [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/specificity/

 


The standard gamble (SG) method is often regarded as the most appropriate for the elicitation of utility, at least when risk is involved in decisions.  This is because it follows the axioms of von Neumann & Morgenstern’s (1944) expected utility theory, which is commonly thought of as being the appropriate normative model for decision making under uncertainty.  The SG method usually involves asking a person to choose between a certain option (where the person remains in Health State A for ‘X’ years), and a risky option (where the person either lives in full health for ‘X’ years or dies immediately).  The probability of immediate death is altered until the point of indifference is identified (i.e. where the respondent values both options equally).  If the point of indifference is identified when the probability of survival is, for example 75%, then this would imply that the individual values Health State A as 75% of full health. Both options would then give the same expected value of QALYs, for example 7.5 QALYs if ‘X’ was 10 years, and the utility of Health State A would be 0.75.  The SG method has limitations in that it relies on the respondent being able to interpret complex probabilities, and also assumes that there are no biases towards ‘certain’ outcomes.
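The arithmetic of the worked example above is captured in this small sketch: at the point of indifference, the probability of the ‘full health’ outcome in the gamble is the utility of the health state.

```python
# Standard gamble: at indifference, u(state) equals the probability of the
# full-health outcome, since u(full health) = 1 and u(death) = 0.
def sg_utility(p_full_health_at_indifference: float) -> float:
    return p_full_health_at_indifference * 1.0 + (1 - p_full_health_at_indifference) * 0.0

u_state_a = sg_utility(0.75)
years = 10
print(f"u(Health State A) = {u_state_a}; QALYs over {years} years = {u_state_a * years}")
# -> u(Health State A) = 0.75; QALYs over 10 years = 7.5
```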


 

How to cite: Standard Gamble [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/standard-gamble/

 


 

In economic evaluation of healthcare interventions stochastic uncertainty is one of a number of sources of uncertainty (others are parameter uncertainty, heterogeneity and structural uncertainty), and refers specifically to random variation in outcomes between identical patients. This is distinct from heterogeneity, which refers to variation between patients attributable to variations in the observed characteristics of those patients. Cohort state transition models (see Markov models) generally do not consider stochastic uncertainty (in their base case or sensitivity analyses): individual-level micro-simulation or discrete event simulation models will be needed if this is important.
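A minimal patient-level simulation illustrates the point: identical patients governed by the same transition probability still experience different outcomes through chance alone. All numbers below are invented.

```python
# Micro-simulation of identical patients with a constant annual death risk.
import random

P_DEATH_PER_YEAR = 0.05   # transition probability, same for every patient
N_PATIENTS = 10_000
MAX_YEARS = 60

random.seed(1)
life_years = []
for _ in range(N_PATIENTS):
    years = 0
    for _ in range(MAX_YEARS):
        if random.random() < P_DEATH_PER_YEAR:   # stochastic event
            break
        years += 1
    life_years.append(years)

mean_ly = sum(life_years) / N_PATIENTS
print(f"Mean life years = {mean_ly:.1f}; range = {min(life_years)}-{max(life_years)}")
```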

How to cite: Stochastic Modelling [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/stochastic-modelling/

 


The term ‘Structured review’ is occasionally used as an alternative term for ‘systematic review’. It may also be used to refer to a review that, while highly structured in its approach, is not as rigorous as a full systematic review.

 

How to cite: Structured Review [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/structured-review/

 


Survival analysis is an analytical method focusing on time-to-event data. Frequently the event is death (overall survival), but many other events can be considered in this way, such as disease progression/relapse, or event occurrence (for prevention). Survival analysis is typically used in oncology, where patient survival (death from any cause) and time-to-progression are often key endpoints of a clinical trial: the analysis frequently forms the basis of associated economic evaluations using (partitioned) survival models. The attraction of survival analysis for economic evaluation is that economic endpoints such as (gains in) life-years and quality-adjusted life years are represented as areas under the (quality-adjusted) survival curve. Health outcomes are considered longitudinally over time, and not cross-sectionally at a specific point in time. The Kaplan-Meier method provides a non-parametric representation of survival over the period for which data were collected, allowing for incomplete patient records where patients are lost to follow-up (censoring). Parametric representations of survival, defined using a number of different statistical distributions such as the Weibull, Gompertz or exponential, allow survival to be extrapolated beyond the measured patient experience, which is important for economic modelling.
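The sketch below hand-rolls a Kaplan-Meier estimate and a constant-hazard (exponential) extrapolation for a tiny invented dataset; a real analysis would use a dedicated package (e.g. lifelines in Python or survival in R).

```python
# Kaplan-Meier estimate plus exponential extrapolation; data are invented.
times  = [2, 3, 3, 5, 7, 8, 8, 10, 12, 12]   # months to event or censoring
events = [1, 1, 0, 1, 1, 0, 1,  1,  0,  1]   # 1 = death observed, 0 = censored

# Kaplan-Meier: at each distinct event time t, S(t) *= (1 - d_t / n_t)
s = 1.0
for t in sorted({t for t, e in zip(times, events) if e == 1}):
    d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
    n = sum(1 for ti in times if ti >= t)    # number still at risk at t
    s *= 1 - d / n
    print(f"S({t}) = {s:.3f}")

# Exponential extrapolation: constant hazard h gives S(t) = exp(-h*t) and
# mean survival (area under the curve) = 1/h. MLE: events / follow-up time.
h = sum(events) / sum(times)
print(f"Extrapolated mean survival = {1 / h:.1f} months")   # 10.0 months
```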

 

How to cite: Survival Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/survival-analysis/

 


Systematic reviews adopt a rigorous scientific approach to identify and synthesize all the available evidence pertaining to a specific research question. They are carried out according to pre-defined protocols, which set out the scope of the systematic review, details of the methodology to be employed and reporting of findings. The objective is to ensure a comprehensive and repeatable search together with an unbiased assessment and presentation of the relevant evidence. Key components of a systematic review include: systematic and extensive searches to identify all the relevant published and unpublished literature; study selection according to pre-defined eligibility criteria; assessment of the quality and risk of bias in included studies; presentation of the findings in an independent and impartial manner; and discussion of the limitations of the evidence and of the review.

 

How to cite: Systematic Review [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/systematic-review/

 


T

Test-retest reliability is a measure of the reproducibility of a scale, that is, its ability to provide consistent scores over time in a stable population. In an experiment with multiple time points, the expectation is that the chosen measurement tool will consistently reproduce the same result provided all other variables remain the same. Tools which provide such consistency are regarded as having high test-retest reliability, and are therefore appropriate for use in longitudinal research. Reliability is quantified by the magnitude of the relationship between scores on the two administrations, usually expressed as a correlation coefficient (r). A measure providing the same output at every time point would deliver a perfect linear correlation of r = 1.

How to cite: Test-Retest Reliability [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/test-retest-reliability/

 


The time horizon used for an economic evaluation is the duration over which health outcomes and costs are calculated. The choice of time horizon is an important decision for economic modelling, and depends on the nature of the disease and intervention under consideration and the purpose of the analysis. Longer time horizons are applicable to chronic conditions associated with on-going medical management, rather than a cure. A shorter time horizon may be appropriate for some acute conditions, for which long-term consequences are less important. The same time horizon should be used for both costs and health outcomes. A lifetime horizon is preferred by UK NICE, although it may be useful in sensitivity analysis to test intermediate time horizons of 5 to 10 years, for which there may be more robust data. Use of a long-term time horizon is likely to involve extrapolating cohort experience into the future and making assumptions about the continued efficacy of interventions. In modelling terms, this may require projecting current health states and costs of care forward, estimating transitions between health states and the associated health outcomes and costs at time points over the period, as well as discounting of future costs and health outcomes.
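The sketch below accumulates and discounts costs and QALYs over a fixed horizon; the annual values are invented, while the 3.5% discount rate follows UK NICE reference-case practice.

```python
# Discounted totals over a 20-year time horizon (inputs illustrative).
RATE = 0.035          # annual discount rate (UK NICE reference case)
HORIZON_YEARS = 20

annual_cost, annual_qalys = 2_000.0, 0.8   # assumed constant each year

disc_cost = sum(annual_cost / (1 + RATE) ** t for t in range(HORIZON_YEARS))
disc_qalys = sum(annual_qalys / (1 + RATE) ** t for t in range(HORIZON_YEARS))
print(f"Discounted costs = £{disc_cost:,.0f}; discounted QALYs = {disc_qalys:.2f}")
```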

 

How to cite: Time Horizon [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/time-horizon/

 


The time trade-off (TTO) is a choice-based method of eliciting health state utility, which reflects the length of remaining life expectancy that a person may be prepared to trade off in order to avoid remaining in a sub-perfect health state. The TTO method usually involves asking the respondent to consider remaining in a specified health state (‘Health State A’) for the next 10 years, and then to die without pain. The respondent is then asked how many years he or she would need to live in full health (before dying without pain) to make this option exactly as desirable as being in Health State A for 10 years. Typically, the answer will be a value less than 10 years. If, for example, the respondent answers that 8 years in full health would be required, then it is inferred that being in Health State A has 80% of the utility of being in full health (i.e. it has a utility value of 0.8), since both options would then provide exactly 8 QALYs. A limitation of this method is the fact that people do not always value health in future years the same as health at the current time. In order to allow for this, discount rates are applied to QALYs in economic evaluations to adjust the value of future health benefits. However, because it is impossible to calculate the exact rate at which each individual discounts, the adjustments cannot be relied upon to be an exact reflection of their true preferences.
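The underlying arithmetic is simply the ratio of the two durations, as this tiny sketch shows:

```python
# Time trade-off: u(state) = years in full health at indifference / years in state.
def tto_utility(years_full_health: float, years_in_state: float = 10) -> float:
    return years_full_health / years_in_state

print(tto_utility(8))   # -> 0.8: both options then yield exactly 8 QALYs
```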


 

How to cite: Time Trade-Off [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/time-trade-off/

 


In economic evaluations tornado diagrams are used to present the results of multiple univariate sensitivity analyses on a single graph. Each analysis is summarised using a horizontal bar which represents the variation in the model output (usually an ICER) around a central value (corresponding to the base case analysis) as the relevant parameter is varied between two plausible but extreme values. Typically the horizontal bars are ordered so that those with the greatest spread (i.e. the parameters to which the model output is most sensitive) come at the top of the diagram, and those with the lowest spread at the bottom. The resulting stack of horizontal bars has a distinctive tornado shape. Tornado diagrams are used to help the reviewer assess which of the model’s parameters have the greatest influence on its results.
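A sketch of how such a diagram might be drawn with matplotlib follows; the parameter names, base-case ICER and low/high results are all invented for illustration.

```python
# Illustrative tornado diagram; all values are hypothetical.
import matplotlib.pyplot as plt

base_icer = 15_000
analyses = [   # (parameter, ICER at low input value, ICER at high input value)
    ("Utility, progressed state", 11_000, 21_000),
    ("Drug acquisition cost",     12_500, 18_000),
    ("Hazard ratio, OS",          13_500, 17_000),
    ("Admin cost per cycle",      14_500, 15_800),
]
analyses.sort(key=lambda a: abs(a[2] - a[1]))   # widest spread plotted last (top)

fig, ax = plt.subplots()
for i, (_, low, high) in enumerate(analyses):
    ax.barh(i, width=high - low, left=low, color="steelblue")
ax.axvline(base_icer, color="black", linestyle="--", label="Base case")
ax.set_yticks(range(len(analyses)))
ax.set_yticklabels([a[0] for a in analyses])
ax.set_xlabel("ICER (£ per QALY)")
ax.legend()
plt.tight_layout()
plt.show()
```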

 

How to cite: Tornado Diagram [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/tornado-diagram/

 


Two-way sensitivity analysis is a technique used in economic evaluation to assess the robustness of the overall result (typically of a model-based analysis) when simultaneously varying the values of two key input variables (parameters). This is particularly useful when there is a correlation between the two variables being tested, in which case varying them independently in univariate sensitivity analyses may give a misleading view. Examples of such correlated input parameters might be hazard ratios for progression-free survival and overall survival for a cancer therapy, or utility values for moderate and severe disease states.
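A minimal sketch: evaluate a toy ICER over a grid of the two utility values mentioned above. The model and all numbers are invented.

```python
# Two-way sensitivity analysis over a grid of two inputs (toy model).
def toy_icer(u_moderate: float, u_severe: float) -> float:
    inc_cost = 10_000                                 # assumed
    inc_qalys = 2.0 * (u_moderate - u_severe) + 0.3   # toy relationship
    return inc_cost / inc_qalys

print("ICER (£/QALY) by utility pair:")
for u_mod in (0.65, 0.70, 0.75):
    row = "  ".join(f"{toy_icer(u_mod, u_sev):>8,.0f}" for u_sev in (0.40, 0.45, 0.50))
    print(f"u_moderate={u_mod:.2f}: {row}")
```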

 

How to cite: Two Way Sensitivity Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/two-way-sensitivity-analysis/

 


When statistically testing the results of a comparative study two types of error can be made. A Type I error occurs when the null hypothesis (see hypothesis testing) is rejected although it is true (i.e. there is no difference between treatment groups). A Type II error occurs when the null hypothesis fails to be rejected by the statistical test although it is false (i.e. there is indeed a difference between treatment groups). These two concepts are linked closely to significance level (Type I) and study power (Type II). Type I (false positive) error is closely linked to significance level (α): setting a high threshold (low α) means that it is less likely that a significant result, rejecting the null hypothesis of no difference between the groups, will occur when there actually is no difference. By contrast Type II (false negative) error is closely linked to power (1 − β): setting a high threshold (low β) means that it is less likely that the null hypothesis (no difference) fails to be rejected when there actually is a difference between the groups. With stochastic data it is generally not possible to eliminate both Type I and Type II error, and frequently a trade-off needs to be made between the two.
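The trade-off can be made concrete with a normal-approximation power calculation for a two-sample test; the standardised effect size and sample size below are invented. Tightening α visibly lowers power (i.e. raises β).

```python
# Power of a two-sided two-sample z-test (normal approximation).
from scipy.stats import norm

def power(effect: float, n_per_arm: int, alpha: float) -> float:
    z_alpha = norm.ppf(1 - alpha / 2)     # critical value for the test
    z = effect * (n_per_arm / 2) ** 0.5   # standardised detectable signal
    return 1 - norm.cdf(z_alpha - z)

for a in (0.05, 0.01):
    print(f"alpha = {a}: power = {power(0.5, 64, a):.2f}")
# alpha = 0.05: power = 0.81;  alpha = 0.01: power = 0.60
```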

How to cite: Type I and Type II Errors [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/type-i-and-type-ii-errors/

 


U

Costs used in economic evaluation are often calculated as the product of counts of items of resource use associated with a patient’s care and a standard ‘unit’ cost for each type of item. In many healthcare systems there are schedules of standard unit costs for different types of resource utilisation. Commonly used unit costs are for hospital in-patient care (per admission or per bed-day), intensive care (per day), emergency department, outpatient clinic or primary care (per visit), community nursing or therapy care (per hour), diagnostics (per test) and medicines (per tablet or vial). The use of standard unit costs enables analysts to use the same sources for different economic evaluations, limiting one potential source of variability: if different costs are calculated for different interventions, this will be a result of differences in the resources used rather than different valuations of each type of resource. Care should be taken when comparing unit costs across countries, as the definitions of resource items may vary.
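The costing step itself is a sum of counts multiplied by unit costs, as in this sketch; the unit costs shown are placeholders, not figures from any published schedule.

```python
# Cost per patient = sum over items of (count of item x unit cost of item).
unit_costs = {"gp_visit": 39.0, "bed_day": 345.0, "ct_scan": 120.0}   # assumed
resource_use = {"gp_visit": 4, "bed_day": 3, "ct_scan": 1}            # counts

total = sum(n * unit_costs[item] for item, n in resource_use.items())
print(f"Cost per patient = £{total:,.2f}")   # 4*39 + 3*345 + 120 = £1,311.00
```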

 

How to cite: Unit Costs [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/unit-costs/

 


Univariate/one-way sensitivity analysis allows a reviewer to assess the impact that changes in a certain input (parameter) will have on the output results of an economic evaluation (most frequently one based on a model); this may be referred to as assessing the robustness of the result to that parameter. The parameter of interest should be varied between plausible extremes, preferably justified by a review of the available evidence. This is the simplest form of sensitivity analysis, since only one parameter is changed at a time, and correlations between parameters are not taken into account. Tornado diagrams are often used to summarise univariate sensitivity analyses testing a set of input variables in turn.

 

How to cite: Univariate/One Way Sensitivity Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/univariate-one-way-sensitivity-analysis/

 


In budget impact analysis, it is usual to compare a hypothetical future scenario (i.e. the approval of a new technology) against the counterfactual (i.e. no approval of the technology). Because it is rare for a new technology to be provided universally to all eligible patients, budget impact models usually make an assumption about the ‘uptake’ of the technology. This is usually characterised as the proportion of patients that would receive the technology in each consecutive year after its introduction. Often, uptake may start slowly (with a low percentage of patients) and gradually increase over time. Careful consideration should be given to whether the new technology will be given only to new (incident) patients, to existing (prevalent) patients, or to both.
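A budget impact sketch with a gradual uptake curve follows; the eligible population, uptake percentages and per-patient costs are all invented.

```python
# Budget impact vs the counterfactual under an assumed uptake curve.
eligible_per_year = 1_000            # incident patients each year (assumed)
uptake = [0.10, 0.30, 0.55, 0.70]    # share treated in years 1-4 (assumed)
cost_new, cost_old = 5_000, 3_200    # annual cost per patient (assumed)

for year, u in enumerate(uptake, start=1):
    treated = eligible_per_year * u
    impact = treated * (cost_new - cost_old)
    print(f"Year {year}: {treated:.0f} patients -> budget impact £{impact:,.0f}")
```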

How to cite: Uptake [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/uptake/

 


In economic evaluation of healthcare interventions utilities (also called health state preference values) are used to represent the strength of individuals’ preferences for different health states. When utility values are averaged over a population of respondents they can be considered to be valuations of health states. Conventionally the valuations fall between 0 and 1, with 1 representing the valuation of a state of perfect health and 0 representing the valuation of death (non-existence). In some scoring systems a negative utility value is also possible, which indicates that a (very poor) health state is valued as less preferable than death. Sequences of utility values reported over periods of time for individual patients or cohorts of patients may be aggregated to derive quality-adjusted life years, commonly used as outcomes in economic evaluation. Several methods are used to obtain health state preference values (utilities). Direct methods involve individuals being asked to describe and assess health states and place weights on them, using techniques such as Standard Gamble or Time Trade-off. Indirect methods involve the use of generic multi-attribute scoring systems to classify health states according to a number of distinct domains. Utility tariffs for health states described in this way are derived from population surveys. Study subjects are asked to describe their health status at different time points using these systems, and their responses are converted to utilities by using the appropriate tariff. Generic multi-attribute scoring systems are preferred to disease-specific ones as they cover general aspects of health and can facilitate comparisons across different disease areas. The most commonly used multi-attribute utility instrument is EQ-5D (preferred by UK NICE), which has domains of mobility, self-care, usual activities, pain/discomfort and anxiety/depression.
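As a sketch of the aggregation step, the utilities a patient reports over time can be converted to QALYs as the area under the utility curve (here via the trapezoidal rule); the observations are invented.

```python
# QALYs as the area under a patient's utility-over-time curve.
obs_times = [0.0, 0.5, 1.0, 2.0]       # years from baseline
utilities = [0.60, 0.75, 0.80, 0.70]   # utility reported at each time point

qalys = sum(
    (t2 - t1) * (u1 + u2) / 2          # trapezoid for each interval
    for (t1, u1), (t2, u2) in zip(zip(obs_times, utilities),
                                  zip(obs_times[1:], utilities[1:]))
)
print(f"QALYs over {obs_times[-1]:.0f} years = {qalys:.3f}")   # 1.475
```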

 

How to cite: Utility [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/utility/

 


V

Measures such as patient-reported outcome measures (PROMs), used within clinical trials or service evaluations, should be reliable, valid and sensitive enough to detect change. These are key psychometric requirements of such tools. Validity is an overarching term referring to whether an instrument measures what it claims to measure. PROMs are used to measure a construct that is not directly observable, such as quality of life or pain. Validity is important, therefore, since in research we rely on the instrument to measure quality of life or pain, rather than something else. There are different types of validity. The main types are: face validity, content validity, construct validity and criterion validity. Face validity refers to whether a measure is perceived by respondents, for example patients within a clinical trial, to measure what it says it measures. Low face validity may result, for example, in a lower response rate because the instrument is perceived to lack credibility. Content validity is similar but refers to how well the items within the measure cover the construct of interest (for example, whether they cover all of the domains proposed to underlie quality of life). Construct validity refers to the extent to which items within the measure perform as expected (e.g. whether people respond similarly to items that are designed to measure the same or related constructs). Finally, criterion validity refers to the extent to which scores derived from the measure correlate with other outcomes, in the direction anticipated.

 

How to cite: Validity [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/validity/

 


Value based pricing (also called value optimized pricing) is a pricing strategy which sets prices primarily according to the perceived or estimated value of a product or service to customers rather than according to the cost of developing the product or its historical price.

In the healthcare industry, decision-makers are calling for manufacturers to tie the prices of drugs to their actual value to patients, a process which becomes easier with the growth of targeted or individualised therapies. In the UK around 2010 a VBP approach was proposed for the NHS which, in addition to effectiveness and cost-effectiveness, would incorporate a wider set of factors, such as the burden of the illness in society, the level of unmet need, how innovative the drug is and the wider social benefits it offers, as a basis for direct negotiation of drug prices. However, agreement was not reached on how to operationalise VBP and it has been set aside in favour of the somewhat broader ‘value-based healthcare’. More recently there has been renewed interest in ‘indication-based pricing’, which recognises that value may vary according to different uses of a drug (for example in different oncology populations), but this has not yet been implemented in healthcare systems.

 

How to cite: Value Based Pricing [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/value-based-pricing/

 


When used in health technology assessment, value of information (VOI) analysis is an umbrella term referring to the estimation of the value, in terms of costs and health outcomes, of collecting more data/information on key parameters influencing a decision, for example reimbursement of a new technology. Typically this is most useful where the output of an economic evaluation (e.g. an incremental cost-effectiveness ratio (ICER)) is uncertain yet close to a decision threshold (willingness to pay), and a key parameter on which the output is based is itself uncertain. As such, new information reducing uncertainty in that parameter will increase the chance of the correct decision being made, and the ‘value’ of this information is a function of how likely it is to enable a decision to be made or changed. The usefulness of collecting information on parameters that have high certainty, or those which have a small bearing on the output of the economic evaluation, is likely to be low. VOI analysis may be particularly useful in assessing whether a further, expensive, study is likely to yield helpful results (e.g. of effectiveness). VOI analysis may be performed using conventional economic models as long as they include probabilistic sensitivity analysis. Common outputs of VOI analysis are the Expected Value of Perfect Information (EVPI), Expected Value of Partially Perfect Information (EVPPI) and Expected Value of Sample Information (EVSI).
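As a sketch, EVPI can be read off probabilistic sensitivity analysis output as the expected net benefit of choosing the best option per simulation (perfect information) minus the expected net benefit of the single option that is best on average (current information). The simulated net monetary benefits below are invented.

```python
# EVPI from (simulated) PSA output: E[max NB] - max(E[NB]).
import random

random.seed(7)
N = 10_000

# Hypothetical net monetary benefit draws for two options per PSA iteration
nb_samples = [(random.gauss(0, 1_000),        # standard care
               random.gauss(500, 4_000))      # new technology (uncertain)
              for _ in range(N)]

mean_nb = [sum(s[i] for s in nb_samples) / N for i in (0, 1)]
best_now = max(mean_nb)                        # commit to one option today
perfect = sum(max(s) for s in nb_samples) / N  # choose per iteration instead
print(f"EVPI per decision = £{perfect - best_now:,.0f}")
```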

 

How to cite: Value of Information Analysis [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/value-of-information-analysis/

 


Value-based healthcare is a healthcare delivery model in which providers are paid based on patient health outcomes. Under value-based care agreements, providers are rewarded for helping patients to improve their health, reduce the effects and incidence of chronic disease, and live healthier lives in an evidence-based way. Value-based care differs from a fee-for-service or capitated approach, in which providers are paid based on the amount of healthcare services they deliver. The “value” in value-based healthcare is derived by measuring health outcomes against the cost of delivering those outcomes; the primary focus is therefore first to improve the outcomes that are delivered, and then to deliver superior outcomes at a reduced cost.

How to cite: Value Based Healthcare [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/value-based-healthcare/

 


 

W

Willingness-to-pay (WTP) is the valuation of health benefit in monetary terms, often so that it can be used in a cost-benefit analysis. The term WTP may also refer to the survey techniques used to derive WTP valuations. However, in many healthcare systems patients do not make direct purchasing choices for much of their care, and so WTP methods are not frequently used except in certain situations (e.g. waiting times, access arrangements to care, or types of care usually obtained through direct payment) where monetary valuations may be more straightforward to make. One commonly used WTP valuation method is contingent valuation, where individuals are asked to compare different hypothetical situations involving the intervention under investigation. For example, they could be asked to state their WTP for IVF treatment when presented with different probabilities of success.

 

How to cite: Willingness-to-Pay [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/willingness-to-pay/

 
