Our Expertise

  • Evaluation
  • Economic Analysis & Modelling
  • Randomised Controlled Trials
  • Surveys & Longitudinal Data
  • Systematic Review & Rapid Evidence Assessment
  • Big Data Centre

Contact our team to find out more: peru@mmu.ac.uk

Evaluation

In undertaking evaluations, our evaluation designs typically seek to answer four main research questions:

  • Should it work? (Theory of change) What is the underlying ‘theory of change’ which explains how the project will make an impact? An understanding of the theory of change that underpins the project will ensure that we measure the things that really matter during the evaluation.
  • Can it work? (Process/Implementation evaluation) How was the project implemented? Has the project been properly implemented? What were the challenges to implementation and how were they overcome?
  • Does it work? (Impact evaluation) Many of our evaluations investigate the impact of the intervention. For example, for a criminal justice intervention, long-term impact might be assessed using a measure of recidivism, with data drawn from the Police National Computer (PNC). Where practical, we favour evaluation designs that involve a comparison or control group. However, we also recognise the value of in-depth qualitative data in helping us understand why an intervention had the impact it did.
  • Is it worth it? (Economic evaluation) It is anticipated that, if successful, projects/interventions might receive a wider roll-out. It is therefore important to consider whether such an approach is cost-effective and cost-beneficial. The methodology used will depend in part on the results of the impact evaluation. We use a range of approaches including cost-effectiveness analysis, cost-benefit analysis, break-even analysis and Social Return on Investment (SROI).


Economic Analysis & Modelling

To undertake an economic analysis, we work with organisations using a costing methodology that captures three main types of cost:

  • Direct project expenditure
  • Estimates of the value of public resources used by the intervention
  • Estimates of the costs of services or facilities used by the intervention that are free or discounted

These cost data can then form the basis for a variety of economic analyses, including Social Return on Investment (SROI) and long-term models of economic return. Two methodologies we use regularly are Cost-Benefit Analysis (CBA) and Break-even Analysis.
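As a minimal sketch (all figures are invented for illustration, not drawn from any PERU costing), the three cost types might be recorded and aggregated like this:

```python
# Illustrative costing sketch: the three cost types captured by the
# methodology, with invented figures in pounds.
costs = {
    "direct_expenditure": 120_000,         # direct project expenditure
    "public_resources": 35_000,            # estimated value of public resources used
    "free_or_discounted_services": 8_000,  # estimated cost of in-kind services
}
total_cost = sum(costs.values())
print(total_cost)  # 163000
```

The total then feeds into whichever economic analysis is chosen (SROI, CBA or break-even).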

Cost-Benefit Analysis – an assessment of the benefit-cost ratio associated with an intervention requires a clear understanding of the intervention being examined, in particular how it achieves a particular outcome. To calculate the benefit-cost ratio, the following three pieces of data are then required:

  • The extra outcome achieved by the intervention compared with the alternative intervention (the incremental effect, which will come from the impact evaluation);
  • The economic value of these outcomes, which will be derived from existing, published studies; and
  • The extra cost of implementing the intervention compared with an alternative intervention (the incremental cost).
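Given those three inputs, the calculation itself is straightforward. As an illustrative sketch, with invented figures rather than results from any actual evaluation:

```python
def benefit_cost_ratio(incremental_effect, value_per_outcome, incremental_cost):
    """Benefit-cost ratio: economic value of the extra outcomes per pound of extra cost."""
    return (incremental_effect * value_per_outcome) / incremental_cost

# Invented figures: 30 extra positive outcomes, each valued at £2,500
# in published studies, for £50,000 of extra delivery cost.
ratio = benefit_cost_ratio(30, 2_500, 50_000)
print(ratio)  # 1.5 -> every £1 spent returns £1.50 of benefit
```

A ratio above 1 indicates that the valued benefits exceed the incremental cost.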

Break-even Analysis – if the impact evaluation is inconclusive and we are unable to identify an incremental effect for the intervention, we can use a break-even analysis. This shows how the costs saved as a result of the intervention vary with the impact it has, and can be used to determine the minimum effect required for the costs saved to outweigh the cost of the intervention itself. We can then use existing published evidence (from a similar project) to assess whether the project has the potential to achieve the level of impact required for it to ‘break even’.
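A minimal sketch of the break-even calculation, again with invented figures (the per-outcome saving would in practice come from published evidence):

```python
def break_even_effect(intervention_cost, saving_per_outcome):
    """Minimum number of extra outcomes needed for savings to cover the cost."""
    return intervention_cost / saving_per_outcome

# Invented figures: a £50,000 intervention where each reoffence avoided
# saves an estimated £10,000 in criminal justice costs.
print(break_even_effect(50_000, 10_000))  # 5.0 avoided reoffences to break even
```

Published evidence from similar projects can then be compared against this threshold to judge whether breaking even is plausible.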


Randomised Controlled Trials

PERU undertakes randomised field trials (RCTs, or social experiments) in order to evaluate the impact of social interventions.

We deliver trials ourselves and also support others conducting experimental studies. We can design experimental samples, organise the delivery of experimental evaluations and carry out advanced statistical analysis of data from randomised studies. Much of our work is mixed-methods: we combine practical approaches to randomisation with non-experimental research (such as qualitative research) in order to identify not just whether an intervention works, but how and why it works. Our work on randomised intervention studies is led by Professor Stephen Morris, a member of the UK Government’s trials advisory panel. Andrew Smith, Dr Kirstine Szifris and Dr Zsolt Kiss are part of our team currently delivering or advising on randomised studies for Novus Group, NatCen Social Research and the Education Endowment Foundation.
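As a simplified sketch of the core step (not the protocol of any specific PERU trial), a 1:1 individual randomisation might look like this:

```python
import random

def randomise(participants, seed=2024):
    """Allocate participants 1:1 to treatment and control groups at random."""
    rng = random.Random(seed)              # fixed seed -> reproducible allocation
    shuffled = rng.sample(participants, len(participants))
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical participant IDs
treatment, control = randomise([f"id{n:03d}" for n in range(100)])
print(len(treatment), len(control))  # 50 50
```

Real trials typically add refinements such as stratified or blocked randomisation, but the principle of chance-based allocation is the same.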

If you are interested in the possibility of running an experimental evaluation of an intervention and would like advice and support, please get in touch with Professor Stephen Morris.


Surveys & Longitudinal Data

We are experienced in all stages of undertaking surveys:

  • Questionnaire Design
  • Survey Piloting
  • Computer-Assisted Personal Interview (CAPI) development
  • Sampling Methodologies
  • Focus Groups
  • Survey Project Management
  • Large Quantitative and Qualitative Surveys
  • Large Fieldwork (face to face and telephone)
  • Data Analysis using SPSS, Stata and NVivo
  • Reporting and Presentation
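As an illustrative sketch of one of the sampling methodologies listed above (proportionate stratified sampling, using a hypothetical sampling frame):

```python
import random

def stratified_sample(frame, stratum_of, fraction, seed=1):
    """Proportionate stratified sample: apply the same sampling
    fraction within each stratum of the sampling frame."""
    rng = random.Random(seed)
    strata = {}
    for unit in frame:
        strata.setdefault(stratum_of(unit), []).append(unit)
    sample = []
    for units in strata.values():
        sample.extend(rng.sample(units, round(len(units) * fraction)))
    return sample

# Hypothetical frame of 200 young people split evenly across two areas
frame = [{"id": i, "area": "Coventry" if i % 2 else "Nuneaton"} for i in range(200)]
sample = stratified_sample(frame, lambda u: u["area"], 0.10)
print(len(sample))  # 20 (10 from each area)
```

Stratifying ensures each area is represented in proportion to its size in the frame, which simple random sampling only guarantees in expectation.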

Our most recent surveys include:

  • The FP7 MYPLACE UK survey in Coventry and Nuneaton in the West Midlands, for which we completed almost 1,200 45-minute face-to-face interviews with young people aged 16–25.

  • Leading the FP7 MYWEB project, which is exploring the feasibility of a pan-European longitudinal survey to provide policy-makers and researchers with a better understanding of children and young people’s wellbeing.

  • Designing a rail passenger survey as part of a Technology Strategy Board-funded project called Rail Incident Manager.

  • The ‘Legal Highs’ survey in Manchester nightclubs, which involved interviewing approximately 1,500 clubbers on their experience of legal highs.


Systematic Review & Rapid Evidence Assessment

Often policy-makers and those designing specific interventions want to start by reviewing the existing evidence base.

A systematic review ensures that all available evidence is taken into account and that the methodological rigour of that evidence is factored into the review. In this way, the evidence assessment avoids being unduly influenced by poor-quality research. A formal systematic review can take a long time to complete, so we often use the Rapid Evidence Assessment (REA) methodology to produce high-quality evidence assessments in a timescale that fits the needs of policy-makers. Where possible, we synthesise high-quality quantitative data using meta-analysis.
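A minimal sketch of one common meta-analytic approach, a fixed-effect (inverse-variance) pooled estimate, using invented effect sizes rather than results from any actual review:

```python
def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance weighted (fixed-effect) pooled estimate and its
    standard error: studies with smaller standard errors get more weight."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Invented effect sizes and standard errors from three hypothetical studies
pooled, se = fixed_effect_meta([0.20, 0.35, 0.10], [0.10, 0.15, 0.08])
```

The pooled standard error is smaller than any single study's, which is the point of synthesis: combined evidence is more precise than its parts. Random-effects models extend this when studies are heterogeneous.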

There is increasing interest in alternative approaches to synthesising research evidence, including approaches designed specifically for the synthesis of qualitative research. We are familiar with several of these and have applied them in a range of settings.