Methods

We are a methods-led research and evaluation unit. We deploy a wide range of methods to help policy-makers answer important questions, from better understanding people’s needs to finding out whether a policy worked.

Making sure that we use the most appropriate method to answer a particular research or evaluation question is one of the most important parts of any project we undertake. We spend time discussing methodological options with stakeholders and sometimes carry out a feasibility study before starting a research project.

Methodological innovation is at the heart of what we do, and we enjoy teaching and writing about methods. We teach methods to both undergraduate and postgraduate students at Manchester Metropolitan University, and our writing on methods includes textbooks and articles published in academic journals.

Our Expertise

  • Evaluation
  • Economic Analysis & Modelling
  • Randomised Controlled Trials
  • Surveys & Longitudinal Data
  • Systematic Review & Rapid Evidence Assessment
  • Big Data Centre

Contact our team to find out more: peru@mmu.ac.uk

Evaluation

In undertaking evaluation, we often seek to answer four main research questions:

  • Should it work? (Theory of change) What is the underlying ‘theory of change’ that explains how the project will make an impact? Understanding the theory of change that underpins the project ensures that we measure the things that really matter during the evaluation.
  • Can it work? (Process/implementation evaluation) How was the project implemented, and was it implemented properly? What were the challenges to implementation and how were they overcome?
  • Does it work? (Impact evaluation) Many of our evaluations investigate the impact of the intervention. For example, for a criminal justice intervention, long-term impact might be assessed using a measure of recidivism with data drawn from the Police National Computer (PNC). Where practical we favour evaluation designs that involve a comparison or control group. However, we also recognise the value of in-depth qualitative data to help us understand why an intervention had the impact it did.
  • Is it worth it? (Economic evaluation) If successful, a project or intervention may be rolled out more widely, so it is important to consider whether the approach is cost-effective and cost-beneficial. The methodology used will depend in part on the results of the impact evaluation. We use a range of approaches including cost-effectiveness analysis, cost-benefit analysis, break-even analysis and Social Return on Investment (SROI).

Most of our evaluations start by examining the theory of change that underpins the policy or programme we are evaluating. This helps the people we work with to refine their intervention and ensures that we measure the things that really matter during the evaluation.

Many of our evaluations also involve a Process Evaluation. Process evaluations help stakeholders understand how a policy or programme was implemented, and a key early question often concerns fidelity: the extent to which a policy or programme is implemented in line with its developers’ intentions. Process evaluations also seek to explain why a policy or programme did or didn’t work.

Impact evaluation is at the heart of what we do. Where practical, we favour evaluation designs that involve a comparison or control group, and we are a leading research group for the design and delivery of Randomised Controlled Trials in social policy settings. However, we also recognise the value of in-depth analysis of individual cases to help us understand why an intervention had the impact it did.

Demonstrating the impact (or otherwise) of a policy or programme is important, but it does not, in and of itself, justify further investment; that is the role of economic evaluation. The most common forms of economic evaluation we undertake are Cost-Effectiveness Analysis and Cost-Benefit Analysis.

Economic Analysis & Modelling

We carry out microsimulation modelling to estimate the fiscal, distributional and poverty effects of tax and benefit policy, and economic analysis to evaluate the costs and benefits of individual projects:

Microsimulation modelling

PERU maintains and develops the IPPR Tax-Benefit Model, which is used by most major think-tanks in the UK to estimate the fiscal, distributional and poverty effects of tax and benefit policy. Model users include the Institute for Public Policy Research, Resolution Foundation, Centre for Social Justice, Joseph Rowntree Foundation, New Economics Foundation, Legatum Institute and Fraser of Allander Institute.

PERU also provides expert advice to the Health Foundation on its project to build a dynamic microsimulation model to estimate future demand for healthcare.
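
To give a flavour of how static microsimulation works, here is a minimal sketch in Python. The households, tax parameters and poverty line are all invented for the example; it illustrates the general technique, not the IPPR model or its rules.

```python
# Static tax-benefit microsimulation in miniature. All rules and figures
# below are hypothetical; this is not the IPPR Tax-Benefit Model.

POVERTY_LINE = 17_000  # hypothetical income poverty threshold (GBP/year)

def net_income(gross, personal_allowance, basic_rate):
    """Net income under a stylised single-band income tax."""
    taxable = max(gross - personal_allowance, 0)
    return gross - taxable * basic_rate

# Hypothetical micro-data: one gross annual income per household.
households = [12_000, 16_500, 22_000, 35_000, 60_000]

policies = {
    "baseline": {"personal_allowance": 12_570, "basic_rate": 0.20},
    "reform":   {"personal_allowance": 14_000, "basic_rate": 0.22},
}

for name, params in policies.items():
    nets = [net_income(g, **params) for g in households]
    in_poverty = sum(n < POVERTY_LINE for n in nets)
    revenue = sum(g - n for g, n in zip(households, nets))
    print(f"{name}: tax revenue £{revenue:,.0f}, households below line: {in_poverty}")
```

Dynamic microsimulation, as in the Health Foundation project, additionally projects each unit forward through time, updating its characteristics year by year rather than applying rules to a single cross-section.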

Economic analysis of projects

We work with organisations using a costing methodology which captures three main types of cost (a simple illustration follows the list):

  • Direct project expenditure
  • Estimates of the value of public resources used by the intervention
  • Estimates of the costs of services or facilities used by the intervention that are free or discounted
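
A minimal sketch of how such a cost ledger might be assembled and totalled; the categories follow the list above and every figure is invented for illustration.

```python
# Hypothetical cost ledger for an intervention, grouped into the three
# cost categories listed above. All figures are invented for illustration.

costs = {
    "direct project expenditure": [
        ("staff salaries", 85_000),
        ("materials", 6_500),
    ],
    "public resources used": [
        ("police officer time", 12_000),
    ],
    "free or discounted services": [
        ("donated venue hire", 4_200),
    ],
}

total = 0
for category, items in costs.items():
    subtotal = sum(amount for _, amount in items)
    total += subtotal
    print(f"{category}: £{subtotal:,}")
print(f"total economic cost: £{total:,}")
```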

These cost data can then form the basis for a variety of economic analyses, including Social Return on Investment (SROI) and long-term models of economic return. Two methodologies we use regularly are Cost-Benefit Analysis (CBA) and Break-even Analysis.

Cost-Benefit Analysis – This is an assessment of the benefit-cost ratio associated with an intervention, and it requires a clear understanding of the intervention being examined, in particular how it achieves a given outcome. To calculate the benefit-cost ratio, the following three pieces of data are required (a worked sketch follows the list):

  • The extra outcome achieved by the intervention compared with the alternative intervention (the incremental effect, which will come from the impact evaluation)
  • The economic value of these outcomes, which will be derived from existing, published studies; and
  • The extra cost of implementing the intervention compared with an alternative intervention (the incremental cost)
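
A worked sketch of how these three inputs combine; all figures are hypothetical.

```python
# Benefit-cost ratio from the three inputs listed above.
# All figures are hypothetical.

incremental_effect = 25        # extra outcomes vs the alternative (from the impact evaluation)
value_per_outcome = 8_000.0    # GBP per outcome, from published valuation studies
incremental_cost = 120_000.0   # extra cost vs the alternative (GBP)

benefit = incremental_effect * value_per_outcome
bcr = benefit / incremental_cost
print(f"benefit-cost ratio: {bcr:.2f}")  # above 1.0, monetised benefits exceed costs
```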

Break-even analysis – If the impact evaluation is inconclusive and we are unable to identify an incremental effect for the intervention, we can use a break-even analysis. This shows how the costs saved as a result of the intervention vary with the impact the intervention has, and it can be used to determine the minimum effect required for the costs saved to outweigh the cost of the intervention. We can then use existing published evidence (from a similar project) to assess whether the project has the potential to achieve the level of impact required for it to ‘break even’.
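
A hypothetical illustration of the break-even logic described above:

```python
# Break-even logic with hypothetical figures: how large must the effect be
# for the savings generated to cover the cost of the intervention?

intervention_cost = 120_000.0   # GBP
saving_per_outcome = 8_000.0    # GBP saved per additional outcome achieved

break_even = intervention_cost / saving_per_outcome
print(f"minimum effect to break even: {break_even:.0f} outcomes")

# Net saving across a range of plausible effect sizes.
for effect in (5, 10, 15, 20):
    net = effect * saving_per_outcome - intervention_cost
    print(f"effect = {effect:>2}: net saving £{net:,.0f}")
```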

Randomised Controlled Trials

PERU undertakes randomised field trials (RCTs or social experiments) in order to evaluate the impact of social interventions:

We deliver trials ourselves and support others conducting experimental studies. We can design experimental samples, organise the delivery of experimental evaluations, and carry out advanced statistical analysis of data from randomised studies. Much of our work is mixed methods: we combine practical approaches to randomisation with non-experimental research (such as qualitative research) to identify not just whether an intervention works, but how it works and why. Our work on randomised intervention studies is led by Professor Stephen Morris, who is a member of the UK Government’s evaluation advisory panel. Dr Will Cook, Sandor Gellen and Dr Karolina Krzemieniewska-Nandwani are part of our team, working with partners from across the university (for example Professor Cathy Lewin and Dr Steph Ainsworth from ESRI) to deliver or advise on randomised studies for a number of the UK Government’s What Works Centres.
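
As an illustration of the basic mechanics, the following sketch randomises participants to treatment and control within sites (a simple stratified design) and recovers the effect with a difference in means. The data, sites and effect size are simulated for the example; a real trial would also involve power calculations, allocation concealment and pre-specified analysis.

```python
import random
import statistics

# Stratified (site-blocked) random assignment and a difference-in-means
# estimate. Data are simulated; the true effect is set to +2.0 so the
# estimate can be checked against it.

random.seed(42)
participants = [{"id": i, "site": f"site_{i % 3}"} for i in range(60)]

# Randomise within each site so the arms are balanced across sites.
for site in sorted({p["site"] for p in participants}):
    stratum = [p for p in participants if p["site"] == site]
    random.shuffle(stratum)
    for rank, person in enumerate(stratum):
        person["arm"] = "treatment" if rank < len(stratum) // 2 else "control"

# Simulate outcomes: baseline noise plus the treatment effect.
for person in participants:
    person["outcome"] = random.gauss(10.0, 3.0) + (2.0 if person["arm"] == "treatment" else 0.0)

treated = [p["outcome"] for p in participants if p["arm"] == "treatment"]
control = [p["outcome"] for p in participants if p["arm"] == "control"]
print(f"estimated treatment effect: {statistics.mean(treated) - statistics.mean(control):.2f}")
```

Blocking the randomisation by site guarantees balanced arms within each site, which protects the comparison against site-level differences in context or delivery.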

If you are interested in running an experimental evaluation of an intervention and would like advice and support, please get in touch with Professor Stephen Morris.

Surveys & Longitudinal Data

We have extensive experience of designing, delivering and analysing surveys, including:

  • Questionnaire design
  • Survey piloting
  • Computer-Assisted Personal Interview (CAPI) development
  • Questionnaire translation
  • Sampling methodologies (see the sketch after this list)
  • Focus groups
  • Survey project management
  • Secondary data analysis
  • Administrative data analysis (including with linked survey data)
  • Horizon scanning
  • Data analysis using SPSS, Stata, R and NVivo
  • Reporting and presentation
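
As a small illustration of one of these capabilities, this sketch draws a proportionate stratified sample from a hypothetical sampling frame and attaches design weights; it does not describe any specific PERU survey.

```python
import random

# Proportionate stratified sampling with design weights, on a hypothetical
# sampling frame of 10,000 units split across three regions.

random.seed(7)
frame = [{"id": i, "region": random.choice(["north", "midlands", "south"])}
         for i in range(10_000)]

SAMPLING_FRACTION = 0.01  # draw a 1% sample within each stratum
sample = []
for region in ("north", "midlands", "south"):
    stratum = [unit for unit in frame if unit["region"] == region]
    n_h = round(len(stratum) * SAMPLING_FRACTION)
    for unit in random.sample(stratum, n_h):
        unit["weight"] = len(stratum) / n_h  # design weight = N_h / n_h
        sample.append(unit)

print(f"sample size: {len(sample)}")
# Summing the weights should approximately recover the frame size.
print(f"weighted population estimate: {sum(u['weight'] for u in sample):,.0f}")
```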

Selected Projects:

2024–2027 EU Horizon Europe LEARN project (101132531)

  • International comparative longitudinal data analysis
  • Linked survey and administrative data analysis

2022–2027 EU Horizon Europe GUIDEPREP project (101078945)

  • International longitudinal survey infrastructure development

2020–2025 EU Horizon 2020 COORDINATE project (101008589)

  • International pilot survey of a child wellbeing questionnaire
  • Development of international community of researchers focussing on child wellbeing
  • International mobility and capacity building of quantitative researchers examining child wellbeing

2018–2021 EU Horizon 2020 MiCreate project (822664)

  • Questionnaire design
  • Field data collection
  • Data analysis and reporting

2017–2019 EU Horizon 2020 ECDP project (777449)

  • Research design of a pan-European longitudinal survey of child wellbeing
  • Questionnaire design
  • Fieldwork plan
  • Sample design
  • Ethical framework

2013–2015 EU FP7 MYWEB project (SSH FP7-613368)

  • Feasibility of a pan-European longitudinal survey to provide policy-makers and researchers with a better understanding of children and young people’s wellbeing
  • Delphi study of experts across Europe
  • Development of child wellbeing questions for 7- and 8-year-olds, including cognitive interviewing

2011–2015 EU FP7 MYPLACE project (SSH FP7-266831)

  • Coordination of questionnaire survey fieldwork in 15 European countries
  • Focus groups of young people
  • Semi-structured interviews of young people

Systematic Review & Rapid Evidence Assessment

We often use the Rapid Evidence Assessment (REA) methodology to produce high-quality evidence assessments in a timescale that fits the needs of policy-makers:

Policy-makers and those designing specific interventions often want to start by reviewing the existing evidence base. A systematic review ensures that all available evidence is taken into account and that the methodological rigour of that evidence is factored into the review, so that the assessment is not unduly influenced by poor-quality research. A formal systematic review can take a long time to complete, however, which is why we often use the REA methodology to produce high-quality evidence assessments on policy timescales. Where possible we synthesise high-quality quantitative data using meta-analysis (a minimal sketch follows). There is also increasing interest in alternative approaches to synthesising research evidence, including approaches designed for the synthesis of qualitative research; we are familiar with several of these and have applied them in a range of settings.
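
As an illustration of that meta-analytic step, here is a minimal fixed-effect, inverse-variance pooling sketch; the study effects and standard errors are hypothetical.

```python
import math

# Fixed-effect inverse-variance meta-analysis in miniature.
# Study effect sizes and standard errors are hypothetical.

studies = [(0.30, 0.10), (0.15, 0.08), (0.45, 0.20)]  # (effect, standard error)

weights = [1 / se ** 2 for _, se in studies]            # precision weights
pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
print(f"95% CI: [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
```

Weighting each study by the inverse of its variance gives more precise studies more influence on the pooled estimate, which is why methodological rigour matters to the synthesis.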

Big Data Centre

We focus on advancing novel methodological approaches in geostatistics, machine learning and agent-based simulation (ABM) to unlock the potential of big data:

Big data affords exciting new research opportunities, opening the prospect of insight into policy issues that have hitherto been impenetrable with traditional data sources. However, the sheer scale and high dimensionality of big data pose significant methodological challenges, such as scalability, noise and spurious correlation. Addressing these challenges requires new statistical thinking and methodological development, and we concentrate our efforts on geostatistics, machine learning and agent-based simulation to shed light on social policy problems (a minimal ABM sketch follows).
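
To illustrate what an agent-based simulation involves at its simplest, here is a minimal sketch of a social-influence adoption model; the agents, meeting structure and parameters are all hypothetical.

```python
import random

# Minimal agent-based simulation: each agent adopts a behaviour with a
# probability that rises with the share of randomly-met peers who have
# already adopted. All parameters are hypothetical.

random.seed(1)
N_AGENTS, N_STEPS, MEETINGS, INFLUENCE = 1_000, 20, 5, 0.5

adopted = [False] * N_AGENTS
for early_adopter in random.sample(range(N_AGENTS), 10):  # seed adopters
    adopted[early_adopter] = True

for step in range(1, N_STEPS + 1):
    for agent in range(N_AGENTS):
        if not adopted[agent]:
            peers = random.sample(range(N_AGENTS), MEETINGS)
            exposure = sum(adopted[p] for p in peers) / MEETINGS
            if random.random() < INFLUENCE * exposure:
                adopted[agent] = True
    if step % 5 == 0:
        print(f"step {step:>2}: {sum(adopted)} of {N_AGENTS} agents have adopted")
```

Even this toy model produces the S-shaped diffusion curves seen in real adoption data, which is the appeal of ABM: aggregate patterns emerge from simple individual-level rules.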
