Introduction

Choosing the most appropriate method to answer a particular research or evaluation question is one of the most important parts of any project we undertake. We spend time discussing methodological options with stakeholders and sometimes carry out a ‘Feasibility Study’ before starting a research project.

Methodological innovation is at the heart of what we do, and we enjoy teaching and writing about methods. We teach methods to both undergraduate and postgraduate students at Manchester Metropolitan University, and our writing on methods includes textbooks and articles published in academic journals.

On this page we describe some of the methods we use most commonly.

Evaluation

Most of our evaluations start by examining the Theory of Change that underpins the policy or programme we are evaluating. This helps the people we work with refine their intervention and ensures that we measure the things that really matter during the evaluation.

Many of our evaluations also involve a Process Evaluation. Process evaluations help stakeholders understand the implementation of a policy or programme, and a key early question often concerns fidelity – the extent to which a policy or programme is implemented in line with its developer’s intentions. Process evaluations also seek to explain why a policy or programme did or did not work.

Impact evaluation is central to our work. Where practical, we favour evaluation designs that involve a comparison or control group, and we are a leading research group for the design and delivery of Randomised Controlled Trials in social policy settings. However, we also recognise the value of in-depth analysis of individual cases to help us understand why an intervention had the impact it did.

Demonstrating the impact (or otherwise) of a policy or programme is important, but does not, in and of itself, justify further investment. Economic evaluations provide that justification by weighing a programme’s effects against its costs. The most common forms of economic evaluation we undertake are Cost Effectiveness Analysis and Cost Benefit Analysis.
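
To illustrate the basic arithmetic behind a cost-effectiveness comparison, here is a minimal sketch in Python. All figures are invented for illustration only; a real Cost Effectiveness Analysis would also involve careful costing, discounting and sensitivity analysis.

    # Illustrative only: a minimal cost-effectiveness calculation with
    # hypothetical figures, not results from any real evaluation.

    programme_cost = 250_000.0    # total cost of the intervention (GBP)
    comparator_cost = 100_000.0   # total cost of business-as-usual (GBP)

    programme_outcome = 120.0     # e.g. participants achieving the target outcome
    comparator_outcome = 70.0

    # Incremental cost-effectiveness ratio (ICER): the extra cost incurred
    # per extra unit of outcome gained, relative to the comparator.
    icer = (programme_cost - comparator_cost) / (programme_outcome - comparator_outcome)
    print(f"ICER: £{icer:,.0f} per additional successful outcome")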

Economic Analysis & Modelling


Randomised Controlled Trials

PERU undertakes randomised field trials (RCTs or social experiments) to evaluate the impact of social interventions. We deliver trials ourselves and also support others conducting experimental studies. We can design experimental samples, organise the delivery of experimental evaluations and carry out advanced statistical analysis of data from randomised studies. Much of our work is mixed methods: we combine practical approaches to randomisation with non-experimental research (such as qualitative research) to identify not just whether an intervention works, but how and why it works.
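
To give a flavour of the core logic of an individually randomised trial – random allocation followed by a comparison of group means – the sketch below uses simulated data in Python. Real trials involve much more, including power calculations, pre-registration and adjustment for covariates and attrition.

    # A minimal sketch of an individually randomised trial on simulated data:
    # random 1:1 allocation, then an unadjusted difference-in-means estimate.
    import random
    import statistics

    random.seed(42)

    participants = list(range(200))
    treatment = set(random.sample(participants, k=100))  # 1:1 random allocation

    # Simulate an outcome with a modest built-in treatment effect.
    outcomes = {
        p: random.gauss(0.3 if p in treatment else 0.0, 1.0)
        for p in participants
    }

    treated = [outcomes[p] for p in participants if p in treatment]
    control = [outcomes[p] for p in participants if p not in treatment]

    # Because allocation is random, the mean difference between groups is an
    # unbiased estimate of the intervention's impact.
    impact = statistics.mean(treated) - statistics.mean(control)
    print(f"Estimated impact: {impact:.2f} standard-deviation units")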

Surveys & Longitudinal Data

We have extensive experience of designing, delivering and analysing surveys, including:

  • Questionnaire Design
  • Survey Piloting
  • Validating Survey Instruments
  • Computer-Assisted Personal Interview (CAPI) Development
  • Sampling Methodologies (see the sketch below)
  • Data Analysis using SPSS, Stata and NVivo
  • Reporting and Presentation

We have particular expertise in international comparative and longitudinal survey methodologies, including designing birth cohort studies.
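
As an illustration of one common sampling design from the list above, the sketch below shows proportionate stratified random sampling in Python. The sampling frame and strata are invented; real designs are tailored to the population and objectives of each survey.

    # A minimal sketch of proportionate stratified random sampling.
    # The frame (10,000 people across three invented regions) is hypothetical.
    import random
    from collections import defaultdict

    random.seed(1)

    frame = [(i, random.choice(["North", "Midlands", "South"])) for i in range(10_000)]

    by_stratum = defaultdict(list)
    for person_id, region in frame:
        by_stratum[region].append(person_id)

    sample_size = 500
    sample = []
    for region, members in by_stratum.items():
        # Allocate the sample in proportion to each stratum's share of the frame.
        n = round(sample_size * len(members) / len(frame))
        sample.extend(random.sample(members, n))

    print(f"Drew {len(sample)} respondents across {len(by_stratum)} strata")

Because of rounding, the achieved sample can differ from the target by a respondent or two; real designs handle allocation more carefully.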

Systematic Review & Rapid Evidence Assessment

Often policy-makers and those designing specific interventions want to start by reviewing the existing evidence base. A systematic review ensures that all available evidence is taken into account and that the methodological rigour of that evidence is factored into the review. This way the evidence assessment can avoid being unduly influenced by poor-quality research.

A formal systematic review can take a long time to complete, so we often use the Rapid Evidence Assessment (REA) methodology to produce high-quality evidence assessments in a timescale that fits the needs of policy-makers. Where possible we synthesise high-quality quantitative data using meta-analysis.

There is increasing interest in alternative approaches to synthesising research evidence, including approaches designed for the synthesis of qualitative research. We are familiar with a number of these, including Realist Review, and have used them in a range of settings.
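
To show what quantitative synthesis involves at its simplest, here is a minimal Python sketch of a fixed-effect (inverse-variance) meta-analysis. The effect sizes and standard errors below are invented; a real synthesis would also assess heterogeneity and consider a random-effects model.

    # A minimal fixed-effect (inverse-variance) meta-analysis on made-up data.
    import math

    # (effect size, standard error) for each hypothetical study
    studies = [(0.30, 0.10), (0.15, 0.08), (0.42, 0.15)]

    weights = [1 / se**2 for _, se in studies]  # inverse-variance weights

    # Pooled effect: the precision-weighted average of the study effects.
    pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    # 95% confidence interval for the pooled effect.
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"Pooled effect: {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")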

Big Data Centre

Big data affords exciting new research opportunities, opening up the prospect of insight into policy issues that have hitherto remained impenetrable with traditional data sources. However, the sheer scale and high dimensionality of big data pose significant methodological challenges, such as scalability, noise and spurious correlation. Addressing these challenges requires new statistical thinking and methodological development. We focus on advancing novel methodological approaches in geostatistics, machine learning and agent-based modelling (ABM) to unlock the potential of big data to shed light on social policy problems.
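
To give a flavour of agent-based modelling, the sketch below implements a deliberately tiny threshold model of behaviour adoption in Python: agents on a ring network adopt a behaviour once enough of their neighbours have done so. The network, threshold and starting conditions are arbitrary assumptions for illustration only.

    # A minimal agent-based model: adoption spreads when at least a quarter
    # of an agent's four ring-network neighbours have already adopted.
    import random

    random.seed(7)

    N, THRESHOLD, STEPS = 100, 0.25, 20
    adopted = [random.random() < 0.05 for _ in range(N)]  # a few initial adopters

    for _ in range(STEPS):
        new_state = adopted[:]
        for i in range(N):
            neighbours = [adopted[(i + d) % N] for d in (-2, -1, 1, 2)]
            if sum(neighbours) / len(neighbours) >= THRESHOLD:
                new_state[i] = True
        adopted = new_state

    print(f"Adopters after {STEPS} steps: {sum(adopted)} of {N}")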