This page provides definitions to explain the categorisations applied to research and evaluation studies hosted on the resource hub.

Evaluation design

Randomised Controlled Trial: A study where people are randomly assigned to two or more groups: one group receives the intervention being tested, while the other (the comparison or control group) receives an alternative intervention or no intervention at all.

Quasi-Experimental: A study where the comparison group has not been created at random. Statistical techniques are used to compare outcomes between the groups even where their characteristics may differ, giving an indication of whether an intervention has been effective. Common examples include Regression Discontinuity Design, Interrupted Time Series, Difference in Differences and Propensity Score Matching. A simple illustration of one of these techniques is given below.
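
As a simplified, hypothetical illustration, Difference in Differences compares the change over time in the group receiving the intervention with the change over time in the comparison group. If an outcome improved by 10 percentage points among children taking part in a programme and by 4 percentage points in the comparison group, the estimated effect of the programme would be the difference between the two changes: 10 − 4 = 6 percentage points. These figures are illustrative only and are not drawn from any study on the hub.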

Theory Based: A way to draw conclusions about a programme’s effectiveness in the absence of any comparison group. This usually involves establishing a theory of change and collecting evidence to understand whether a programme is working the way it is expected to, including how and why it works.

Pre-Post Outcome Evaluation: Measuring outcomes before and after an intervention, either with no comparison group or with a non-matched comparison group. For example, children who do not take part in the programme may have different characteristics to the children who do take part, meaning any differences in their outcomes may not be due to the programme.

Qualitative Outcome Evaluation: Perceived outcomes are measured through qualitative data collection methods.

Process Evaluation: Evaluation to understand how the programme was delivered and implemented, how it was experienced by users, as well as what worked well and not so well.

Evidence Review: Bringing together findings from a number of different studies on the same subject. Common examples include meta-analyses, systematic reviews and rapid evidence assessments.

Does It Work?

Untested, new or innovative: Where no formal evaluation has been carried out, or only a process evaluation has been used.

Promising: Where a qualitative outcome, pre-post or theory-based evaluation has been used, and some benefit or potential benefit has been identified.

Effective for one or more outcomes: Where a Randomised Controlled Trial or Quasi-Experimental Design has been used and beneficial effects have been identified.

Not effective for any outcomes measured: Where any outcome or impact evaluation design has been used and findings show no benefit.

Harmful: Where the outcomes measured show detrimental effects for children.

Outcomes

Outcomes for which the programme has been found to have a positive effect.

Population and sample size

Details about the size and characteristics of the sample, including details of any control group.

Methods

A summary of the data collection methods used in the study.

Findings

A summary of the research or evaluation findings.

Strength of Evidence

A summary of the key strengths and limitations of the study.

What Are Quantitative and Qualitative Data?

Quantitative data refers to numbers and measurements: data you can count or quantify. It tells you how much or how many of something there is. For example, how many children have remained in education in one group compared to another.

Qualitative data uses words, images, or observations to explain experiences, feelings, or opinions. It helps you understand why or how something happened. For example, the reasons why some children remained in education while others did not.