Allocation concealment

In a randomized controlled trial, participants are assigned to either a treatment or a control group through a random process. Allocation concealment is a way of making sure that the person conducting the trial cannot influence who goes into which group. This is done by creating an allocation sequence that is kept separate from, and concealed from, the investigator running the trial. For instance, Researcher A wants to run a trial of a therapeutic intervention for perpetrators of domestic violence, and creates an allocation sequence for a population of prisoners incarcerated at a particular prison. Researcher B runs the trial, but is unaware of the sequence or the method by which participants are assigned to each group.


Attrition

Attrition is the word used to describe someone dropping out of a research study, and the attrition rate is the way in which we describe the effect of this on the research. If no one drops out, there is zero attrition. If two people drop out of a trial of only ten people, we can say there is an attrition rate of 20%. This is important because the more people who drop out, the greater the impact is likely to be, not just on the overall research, but on the quality of the findings. One reason is that people who drop out are likely to have different characteristics from those who stay in, which can introduce bias. For instance, consider a study of the effectiveness of a short hospital-based intervention in reducing alcohol dependency among street drinkers. Those who drop out of this study might be those with the highest levels of dependency. If enough of this group drops out, then the study group will comprise only those with low and medium levels of dependency, and is no longer representative of street drinkers as a whole. As a consequence, the results might show that the intervention is more successful than it actually is.
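The arithmetic behind an attrition rate is simple enough to sketch as a small Python helper (an illustration only; the function name is my own):

```python
def attrition_rate(enrolled, dropped_out):
    """Percentage of enrolled participants who dropped out of the study."""
    if enrolled <= 0:
        raise ValueError("enrolled must be positive")
    return 100 * dropped_out / enrolled

# As in the example above: two of ten participants drop out.
print(attrition_rate(10, 2))  # 20.0
```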


Binary outcome measure

See Outcome measure


Blinding

Blinding is a similar practice to allocation concealment in that it is designed to prevent those taking part in trials, or administering them, from influencing the results. Single blinding is where the participants do not know which group they are in; double blinding is where neither the participants nor the investigating clinician knows. However, blinding is more problematic than allocation concealment, and is therefore less preferable, especially where it means that the clinician administering the trial does not know what the trial involves.

Comparison group

A comparison group or control group is a group of participants who share similar characteristics with those in the treatment group but do not receive the intervention. In a randomized controlled trial, both groups are formed by random allocation; in quasi-experimental designs, comparison groups are instead constructed using statistical matching techniques. To use an example already described above, a hospital-based intervention is being trialled that treats alcohol dependency among street drinkers. The treatment group will be street drinkers selected at random from a larger group of street drinkers; through randomized selection, and by virtue of probability, they should resemble the wider population of street drinkers. The control group will then be drawn from the same pool of people, in such a way that it resembles the treatment group as closely as possible.


Compliance

Compliance refers to the participants of a trial adhering to the instructions they are given, for example by taking the treatment at the times and in the amounts specified, so that everyone receives the same treatment in the same quantities. Any deviation from this could adversely affect the results and contaminate the research.


Confidence interval

The confidence interval expresses the precision of a study's estimate. Because a study is conducted on a sample, it can only report an estimate of the effect size for the whole population. The true effect (i.e. the population value) is likely to lie somewhere within a range of values known as the confidence interval. For example, a study may report an estimated effect size of 1.68 with a 95% confidence interval of 1.20 to 2.36, meaning we can be 95% confident that the true effect size falls between those two values. In other words, from a sample of 2,000 people designed to resemble the whole population of the UK, we can be 95% confident that, were the research carried out on the whole population, the result would fall within this range.
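As a rough illustration of how such an interval is computed, here is a sketch of a 95% confidence interval for a sample mean using the common normal approximation (the function name and the sample data are my own, not from the glossary):

```python
import math
import statistics as stats

def mean_ci_95(sample):
    """Approximate 95% confidence interval for the population mean,
    using the normal approximation: mean +/- 1.96 * standard error."""
    n = len(sample)
    mean = stats.mean(sample)
    std_err = stats.stdev(sample) / math.sqrt(n)
    return (mean - 1.96 * std_err, mean + 1.96 * std_err)

# Hypothetical effect-size estimates from a small sample.
low, high = mean_ci_95([1.4, 1.7, 1.6, 1.9, 1.5, 1.8])
```

A wider interval means a less precise estimate; larger samples generally produce narrower intervals.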


Contamination

Contamination occurs when members of the control group are exposed to the intervention. For instance, if in a trial of a new drug some members of the control group are accidentally given the real pill rather than the placebo (pretend pill), then we can say that the results have been contaminated.

Continuous outcome measure

See Outcome measure

Control group

A control group is another name given to a comparison group in a trial. It is used more frequently than the term comparison group.


Counterfactual

The counterfactual is what would have happened in the absence of an intervention. A control group is constructed to mimic this condition: because it is so similar to the treatment group, evaluators can compare the outcomes of the two groups and attribute any differences in outcome to the intervention.

Dichotomous outcome measure

See Outcome measure

Effect size

The effect size is a standardized measure of the magnitude and direction of the difference in outcome between the treatment and control groups after an intervention. Effect sizes are typically expressed as odds ratios for binary (dichotomous) outcomes (e.g. reoffending or no reoffending) and as differences in means (averages) for continuous outcomes (e.g. number of crimes).
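For continuous outcomes, one common way of standardizing the difference in means is to divide it by a pooled standard deviation (Cohen's d). A minimal sketch, assuming two lists of outcome scores (the function name is my own):

```python
import math
import statistics as stats

def cohens_d(treatment, control):
    """Standardized mean difference (Cohen's d) using a pooled standard
    deviation; positive values mean the treatment group scored higher."""
    n1, n2 = len(treatment), len(control)
    pooled_var = ((n1 - 1) * stats.variance(treatment) +
                  (n2 - 1) * stats.variance(control)) / (n1 + n2 - 2)
    return (stats.mean(treatment) - stats.mean(control)) / math.sqrt(pooled_var)
```

Because the difference is expressed in standard-deviation units, effect sizes from studies that used different measurement scales can be compared.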

External validity

External validity describes how well the results of a study translate to other contexts, and whether the results can be generalized to other situations or populations. For instance, external validity refers to whether the findings of a trial conducted in India of a new restorative justice approach to violent offending can be applied to the population of Finland. In this case, there may be differences between the populations, such as levels of alcohol consumption among violent offenders, which mean that the trial would produce different findings if replicated in Finland.

Forest Plot

A forest plot summarizes visually the results of a meta-analysis which combines data statistically from multiple primary studies to produce a single estimate of effect. A well-known forest plot was adopted as the logo of the medical research group ‘Cochrane’. This plot combined the results of seven studies that measured the effect of giving corticosteroid injections to women about to give birth prematurely. The combined result, represented in forest plots by a diamond, showed that steroid usage is effective at reducing the complications associated with premature birth.

Hawthorne effect

The Hawthorne effect describes how knowledge of being observed can change behaviour. For example, if participants in an observation study of civility among school children know they are being observed, they might act in ways they feel will present them in the best light (as polite and helpful), and in which they might not have acted if they were not being studied, or did not know they were being studied.

Impact evaluation

Evaluates or measures whether an intervention had an impact on the outcome of interest, and what this impact was. For instance, this would involve determining whether a trial of Cognitive Behavioural Therapy had an impact on reoffending rates among the treatment group, and would also involve an evaluation of the nature of the impact, i.e. whether it increased or reduced reoffending, and in what ways this happened.

Intention-to-treat (ITT)

Intention-to-treat analysis calculates the average effect across everyone randomized to a group, including both those who completed the intervention and those who dropped out. This preserves the benefits of random assignment and reduces the bias that attrition can introduce.
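The distinction can be sketched by comparing the intention-to-treat average over everyone randomized with a "per-protocol" average over completers only (an illustration; the data, names, and 0/1 outcome coding are my own):

```python
def itt_vs_per_protocol(records):
    """records: list of (outcome, completed) pairs for one trial arm.
    Returns (intention-to-treat mean over everyone randomized,
             mean over those who completed the intervention only)."""
    itt_mean = sum(outcome for outcome, _ in records) / len(records)
    completers = [outcome for outcome, completed in records if completed]
    per_protocol_mean = sum(completers) / len(completers)
    return itt_mean, per_protocol_mean

# Hypothetical arm: 1 = did not reoffend, 0 = reoffended.
arm = [(1, True), (1, True), (0, True), (0, False)]
itt, per_protocol = itt_vs_per_protocol(arm)
```

If dropouts fare systematically worse (or better) than completers, the two averages diverge, which is why ITT is the less biased summary of the offer of treatment.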

Internal validity

Internal validity refers to the extent to which the design and conduct of a trial eliminate bias. A number of methods, including blinding and allocation concealment, help to achieve this.


Intervention

The intervention is, put simply, the thing that is done to a person or persons, as part of a study or trial. In the case of medical trials, common forms of intervention are drugs, preventative treatments and lifestyle changes. In the field of criminal justice, interventions can include therapeutic treatments, changes in the kinds of sanctions given for particular crimes, and preventative measures such as restorative justice conferencing for low level anti-social behaviour, with the aim of preventing more serious offending in the future. The success or otherwise of the intervention in this case would be measured by subsequent rates of offending in the group that took part in a restorative justice conference, as compared with the control or comparison group who did not.

John Henry effect

The John Henry effect describes participants in the control group who make additional effort to compensate for not being in the treatment group. An example of this might be where a participant in the control group in a trial of a drug designed to improve cardiovascular health might decide to increase his or her levels of exercise, which might have a positive impact on his or her cardiovascular health.

Matched-pairs design

A matched-pairs design is a type of controlled trial in which individuals in the treatment and control groups are matched according to defined characteristics such as age, ethnicity, IQ or offence type. This can be time-consuming, but it can help to reduce differences between the two groups.


Meta-analysis

A meta-analysis is a rigorous analysis of all existing analyses of a particular intervention. It is a statistical method of combining the results of separate primary studies on a particular intervention to estimate the size of an effect in the population. Meta-analyses are typically conducted as part of a systematic review.

Moderator variable

A moderator variable (m) is a third variable that affects the strength of the relationship between an independent variable (x) and a dependent variable (y). If statistically significant, m can amplify or weaken the relationship between x and y. For example, in a study of which types of car are more likely to be involved in collisions, a moderator variable would be the age of the driver.

Odds ratio

The odds ratio reports the odds of an event (e.g. reoffending) occurring in the treatment group relative to the control or comparison group. For example, if the odds of reoffending in the treatment group are 0.70 and the odds of reoffending in the control group are 0.13, then the odds ratio is 0.70/0.13 ≈ 5.38. This indicates that the odds of reoffending are about five times higher in the treatment group than in the control group. An odds ratio of 1 indicates that the odds of a particular outcome are the same in both groups.
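The arithmetic above can be sketched as a one-line Python helper (illustrative only; the function name is my own):

```python
def odds_ratio(odds_treatment, odds_control):
    """Odds of the event in the treatment group relative to the control group."""
    return odds_treatment / odds_control

# Using the figures from the example above:
print(round(odds_ratio(0.70, 0.13), 2))  # 5.38
```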

Outcome measure

An outcome measure is the way in which the success or otherwise of a particular intervention can be assessed and measured. There are different measures that relate to the nature of the outcome and the kind of data we can use to understand it. A binary outcome in a trial of therapy for offenders might be the response to the question: did the participant go on to reoffend, yes or no? A continuous outcome in the same trial might ask the question: how many times did the participant offend over a two-year period? This cannot be answered with a simple yes or no, but is expressed as a continuous variable, i.e. as a number with no fixed upper limit. A dichotomous outcome measure is another name for a binary outcome measure: the answer takes one of only two values, for example, did the participant offend once, or more than once?


Power

Power refers to the ability of a study to detect whether an intervention has had an effect.

Process evaluation

Monitors whether an intervention was carried out as intended, and whether any bias was introduced into the study, for instance through a failure to conduct proper allocation concealment, through contamination by those taking part, or simply through attrition.

Propensity Score Matching

Propensity Score Matching is a statistical technique for constructing comparison groups by matching participants on their propensity score, i.e. their estimated probability of receiving the treatment given their observed characteristics, so that the comparison group effectively matches the treatment group.

Publication bias

This can occur when studies with a negative or null effect are under-represented in the peer-reviewed literature, leading to an over-estimation of the positive effects of a particular intervention.

Quasi-experimental design

Evaluations that adopt a quasi-experimental design construct comparison groups using statistical techniques, rather than random assignment, to ensure that the characteristics of the comparison group are similar to those of the group exposed to the intervention.

Random assignment

Random assignment ensures that treatment and control groups are similar in both observable and unobservable characteristics by allocating participants on a chance basis. Bias can be further reduced in this method by the introduction of blinding and/or allocation concealment.
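A minimal sketch of chance-based allocation into two groups (the function name and the fixed seed, used here so the split is reproducible, are my own):

```python
import random

def random_assign(participants, seed=None):
    """Shuffle participants and split them into two groups of
    (near) equal size, purely by chance."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

treatment, control = random_assign(["P%02d" % i for i in range(10)], seed=42)
```

Because assignment depends only on chance, neither researcher judgement nor participant characteristics can influence who ends up in which group.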

Randomized Controlled Trial

In a Randomized Controlled Trial, participants are randomly assigned to treatment (the intervention) and control (placebo or ‘business as usual’) groups. Random assignment ensures a high degree of confidence that there are no systematic differences between the treatment and control groups, except that the treatment group participated in the intervention. By eliminating selection bias, we can be confident that any difference in group outcomes is the result of the intervention, not a characteristic of the groups. This equivalence between the treatment and control group is necessary for any causal claims about the effect of the intervention on levels of reoffending.

Rapid Evidence Assessment

A Rapid Evidence Assessment is an accelerated systematic review. It adopts a transparent and rigorous method but has a less exhaustive search strategy.

Sample size

The number of participants involved in a trial.

Selection Bias

Selection bias occurs when observable and unobservable characteristics of participants are correlated with treatment assignment. If selection bias is present, it is not possible to disentangle the effect of an intervention from pre-existing differences between those exposed to it and those who are not. For example, consider a trial of a post-conviction training programme which aims to increase mindfulness among eighteen- to twenty-year-olds convicted of carrying a knife. If most of those in the treatment group, who take the course, were first-time offenders, whilst most of those in the control group were long-term offenders with histories of violence, then the results will be biased, and might suggest that mindfulness training is more effective at reducing knife carrying than it actually is. Conversely, if the two groups were reversed, then mindfulness training might seem less effective than it actually is.


Stable-Unit-Treatment-Value Assumption (SUTVA)

The Stable-Unit-Treatment-Value Assumption (SUTVA) states that the effect of the treatment on each unit (treatment or control) must be independent of the treatment received by any other unit. If SUTVA is violated, a participant’s outcome may be the result of interference from other participants (treatment or control) rather than of the treatment itself. As a result, no causal claims about the effect of the treatment can be made if the treatment status of other participants affects the outcomes of individual participants.

Systematic review

A review of primary studies that adopts explicit and reproducible methods to minimise bias.

Treatment group

The participants in a trial who receive the intervention.

Treatment on the treated (TOT)

The average effect only on those who actually received the intervention.
