Explanations of commonly used terms related to education evidence and research

Key concepts

In education, action research refers to a cycle of reflective inquiry undertaken to understand and improve practice. The cycle typically involves the steps of identifying the problem, developing a research plan, collecting and analysing data, incorporating findings into planning, implementing actions, and monitoring and evaluating the results.

An approach is the term AERO uses to refer to a practice, program or policy.

An association is a relationship between two elements, factors or events where the link cannot be proved or explained as cause and effect. Associations can be positive (for example, higher socioeconomic status is associated with higher student achievement) or negative (for example, higher student absenteeism is associated with lower student achievement).
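To make the idea concrete, the minimal Python sketch below uses invented numbers (not real student data) to show how an association between two quantitative measures is often summarised with a correlation coefficient; the variable names and values are illustrative assumptions only.

```python
import numpy as np

# Hypothetical (invented) data: a socioeconomic index and a test score
# for ten students, used only to illustrate measuring an association.
ses_index = np.array([1.2, 2.5, 3.1, 3.8, 4.0, 4.4, 5.0, 5.6, 6.1, 6.9])
test_score = np.array([48, 55, 53, 61, 60, 66, 64, 72, 70, 78])

# Pearson's r ranges from -1 (perfect negative association)
# to +1 (perfect positive association); 0 means no linear association.
r = np.corrcoef(ses_index, test_score)[0, 1]
print(f"Correlation coefficient r = {r:.2f}")

# Note: a strong correlation on its own does not establish causation.
```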

A baseline is information from an initial point in time, often used for comparison to see how things change over time.

Causation is when one element, factor or event is known to cause another (for example, a particular teaching practice is known to lead to improvements in student test scores). To prove causation between two things (let’s call them A and B), researchers need to show: 1. that there is an association between A and B; 2. that A happens before B; and 3. that B is not caused by some third factor (call it C). In education settings, proving causation is often challenging because of the many influences on teacher and student outcomes.

A comparison group is a group of people in a research study whose responses or outcomes provide a point of comparison against which the effect of the approach being tested can be measured. A comparison group receives a different treatment from the group receiving the approach being tested. There can be any number of comparison groups in a study. A comparison group is called a ‘control group’ when it receives no treatment at all.

Context refers to the social, cultural and environmental factors present in research settings. Taking context into account in research studies is important because context can affect the outcomes of research (that is, evidence generated in one context may not necessarily apply to a different context). Evidence is most relevant when it has been generated in a context similar to the context in which it will be applied. Examples of ‘context’ may include location, the demographics of research participants, or the level of organisational support for the particular approach being researched.

Data is information that is collected and analysed in order to produce findings and/or to inform decision-making. Data can be qualitative (for example, teacher observations or quotes from students) or quantitative (for example, student test scores or attendance data).

An educational approach is effective if it causes (see causation above) a desired change in a particular outcome. This desired change can be an increase in an outcome (for example, increases in student achievement) or it can be a decrease in an outcome (for example, reduction in student absenteeism).

Empirical research presents observable data to substantiate its claims. This data may be primary data (observation and measurement of phenomena or events directly experienced by the researcher) or secondary data (data that has already been collected by other researchers).

Evaluation is the systematic and objective assessment of an approach. Evaluation provides evidence of what has been done well, what could be done better, the extent to which objectives have been achieved and/or the impact of the approach. This evidence can then be used to inform ongoing decision-making regarding the approach.

Evidence is any type of information that supports an assertion, hypothesis or claim. There are many types of evidence in education, including insights drawn from child or student assessments, classroom observations, recommendations from popular education books and findings from research studies and syntheses. AERO refers to two types of evidence in its work:

  • research evidence: This is academic research, such as causal research or synthesis research, which uses rigorous methods to provide insights into educational practice.
  • practitioner-generated evidence: This is evidence generated by practitioners in their daily practice (for example, teacher observations, information gained from formative assessments or insights from student feedback on teacher practice).

Evidence-based practices are educational approaches that are backed up by research evidence. This means there is broad consensus from rigorously conducted evaluations that they work. 

Evidence-informed practice is an educational approach that is applied using evidence from research together with a practitioner’s professional expertise and judgement. The expertise and judgement used by practitioners can be based on knowledge or understanding of their children and students, or the environment in which they work.

Experimental design is the process of planning an experiment that can establish a ‘cause and effect’ relationship (that is, an experiment to determine the specific factors that influence an outcome). Experimental designs account for all other factors that could influence an outcome, so the cause of an effect can be isolated.

Findings from a piece of research are generalisable if they are:

  • a fair representation of trends in the wider population from which the study participants were sampled and/or
  • applicable to settings or contexts other than those in which the study was conducted.

Hierarchies of evidence are sometimes used to rank evidence according to rigour, helping people to compare and evaluate the quality of different types of research evidence. The higher up on the hierarchy, the more rigorous the methodology. Instead of a hierarchy, AERO uses Standards of evidence – a continuum of four levels of confidence along which rigour and relevance increase.

A history effect is the term used to describe influences that occur at the same time as an approach is being evaluated and/or influences that occur between the approach being implemented and the outcomes being measured. For example, researchers may want to know the effect of a particular teacher’s writing program on student writing test scores. However, to do this they need to separate out the effects of any influences that occur simultaneously (for example, other teachers using different writing strategies with these students) and/or those that occur in the two weeks between the implementation of the writing program and the writing test (for example, a whole-school writing celebration).

To hypothesise is to put forward an assumption or idea so that it can be tested to see whether it might be true.

An approach that is applied to address a problem is sometimes referred to as an intervention; for example, a teacher may implement a certain early literacy intervention to support struggling readers. In a research study, an intervention is the approach that is being investigated, tested or evaluated. An intervention is sometimes called a ‘treatment’ in a research study.

A literature review identifies, evaluates and synthesises the relevant literature within a particular field of research. It usually discusses common and emerging approaches, notable patterns and trends, areas of conflict and controversies, and gaps within the relevant literature. Literature reviews do not usually explicitly state the methods used to identify, evaluate or synthesise the relevant literature.

Maturation effects are changes that occur naturally in a setting where an approach is applied (changes that would have occurred anyway), as opposed to effects that occur as a result of the approach. For example, researchers may want to know the effect of a particular educational program on students’ social and emotional skills. However, social and emotional skills develop over time as children mature, so researchers need to distinguish between the effect of the educational program and the effects of natural development as students get older.

A meta-analysis uses statistical methods to summarise the results of individual studies. It is designed to assess the behaviours that lead to a particular approach working and/or to provide an estimate of how much more likely one approach is to work than another. It is the quantitative version of a literature review or systematic review.
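As a rough illustration of the statistical summarising involved, the sketch below pools invented effect sizes from five hypothetical studies using one common method (fixed-effect, inverse-variance weighting); the numbers and study count are assumptions, not drawn from any real review.

```python
import numpy as np

# Hypothetical effect sizes (standardised mean differences) and their
# variances from five invented primary studies of the same approach.
effects = np.array([0.30, 0.45, 0.10, 0.25, 0.50])
variances = np.array([0.02, 0.05, 0.03, 0.04, 0.06])

# Fixed-effect (inverse-variance) pooling: studies with more precise
# estimates (smaller variance) receive more weight.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect size = {pooled:.2f} (SE = {pooled_se:.2f})")
```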

Mixed-methods research is research that uses both qualitative (non-numerical data) and quantitative (numerical data) research methods.

A monograph is an academic piece of writing on a single subject or aspect of a subject that presents the findings of primary research and/or original scholarship. It is usually written by one person.

An outcome measure is an observation that can be used to measure the effect of a particular approach. Outcome measures can be qualitative (such as quotes or observations) or quantitative (such as test scores). For example, when examining whether a particular approach helps students understand a concept, a teacher could set an assessment. The student assessment score could then be used as an outcome measure of student understanding.

Peer review is the assessment of research by others working in the same or a related field. The assessment is based on the expertise and experience of the researcher undertaking the review and should be impartial and independent.

A pilot study, pilot project or pilot experiment is a small-scale trial that is conducted in order to test the effects of an approach before implementing it on a larger scale. A pilot project can also help to determine feasibility, cost, adverse events and necessary improvements to the approach.

If a study shows that an approach leads to the desired outcome, it is said to have a positive effect. Conversely, if a study shows that an approach has the opposite of the desired outcome, it is said to have a negative effect.

A primary study is an individual study that reports on data collected and analysed by the researchers themselves. Primary studies are designed according to the type of research question being answered; for example, they may use qualitative methods, quantitative methods or a mixed-methods approach. The findings from a number of primary studies may be synthesised in meta-analyses, systematic reviews, rapid reviews or literature reviews.

Qualitative methods involve collecting and analysing non-numerical data, and may include observations, interviews, questionnaires, focus groups, and documents and artifact analysis. Qualitative methods can be used to understand concepts, opinions or experiences as well as to gather in-depth insights into a problem or generate new ideas.

Quantitative methods involve collecting and analysing numerical data. Quantitative methods are generally used to find patterns and averages, make predictions, test causal relationships and generalise results to wider populations.

A quasi-experimental design is a research methodology that aims to establish a ‘cause and effect’ relationship (that is, to determine the specific factors that influence an outcome), but it cannot completely eliminate all factors that could influence an outcome (that is, there may still be an element of subjectiveness in the findings).

A randomised controlled trial is a trial of a particular approach that is set up in such a way that researchers can test its effects. In a randomised controlled trial, subjects are randomly assigned to one of two groups: one receiving the approach being tested (the experimental group), and the other receiving an alternative approach or no approach (the comparison group or control). After the trial period, differences between the groups can be attributed to the approach being tested. Researchers and teachers who use randomisation must take into account ethical concerns, such as whether it is ethical to withhold treatment from subjects in the comparison group.
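The sketch below illustrates the random assignment step only, using an invented list of 60 student IDs; in a real trial, outcomes would then be collected for both groups after the trial period and compared. All names and numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical list of 60 student IDs to be randomly assigned.
student_ids = np.arange(1, 61)
rng.shuffle(student_ids)

# Random assignment: half receive the approach being tested
# (experimental group), half receive the usual approach (comparison group).
experimental_group = student_ids[:30]
comparison_group = student_ids[30:]

print("Experimental group:", sorted(experimental_group.tolist()))
print("Comparison group:  ", sorted(comparison_group.tolist()))

# After the trial, outcomes for the two groups are compared; because
# assignment was random, systematic differences between the groups
# can be attributed to the approach being tested.
```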

A rapid review is an evidence-based (or ‘objective’) approach to searching and synthesising research evidence. It uses similar steps to a systematic review, but simplifies or skips some steps so that findings can be reached more quickly (making them more current). Rapid reviews answer a precise, clearly defined question and explicitly outline: the methods for data collection, the methods for data extraction, the number of papers included in the review and the methods for data analysis.

Relevant evidence is evidence produced in contexts that are similar to one’s own context. Evidence can also be considered relevant when it is derived from a large number of studies conducted over a wide range of contexts.

Research is ‘the creation of new knowledge and/or the use of existing knowledge in a new and creative way so as to generate new concepts, methodologies, inventions and understandings’ (Australian Research Council, 2015). There are many types of research. For example:

  • exploratory research involves investigating an issue or problem. It aims to better understand this problem and sometimes leads to the formation of hypotheses or theories about the problem.
  • descriptive research describes a population, situation or event that is being studied. It focuses on developing knowledge about what exists and what is happening.
  • causal research (also known as ‘evaluative research’) uses experimentation to determine whether a cause-and-effect relationship exists between two or more elements, features or factors.
  • synthesis research combines, compares and links existing information to provide a summary and/or new insights or information about a given topic.

Research methods are the methods used to conduct research. Research methods are generally classified as ‘qualitative’ or ‘quantitative’. When both methods are used, it is referred to as ‘mixed-methods’ research. Qualitative methods involve collecting and analysing non-numerical data (such as observations, interviews, questionnaires, focus groups, documents and artifacts). Qualitative methods can be used to understand concepts, opinions or experiences as well as to gather in-depth insights into a problem or generate new ideas. Quantitative methods involve collecting and analysing numerical data. Quantitative methods are generally used to find patterns and averages, make predictions, test causal relationships and generalise results to wider populations.

Evidence is considered rigorous when it proves that a particular approach causes a particular outcome. Rigorous evidence is produced by using specialised research methods that can identify the impact of one particular influence. The most common research method used to produce rigorous evidence is the randomised controlled trial. However, there are many other methods that can produce rigorous evidence, whether qualitative, quantitative or mixed methods. What is important in producing rigorous evidence is that the research method can rule out the effects of as many other influences as possible.

A rubric is a set of criteria that can be used to make consistent judgements. In education settings, a rubric is usually used to help assess learning or development in a particular area.

When studying a large population, it is not possible to include every individual. Research studies usually include a certain number of individuals to represent the population. Those that are included in the study are referred to as a sample of the population. Sample size refers to the number of people in a sample. Generally, the larger the sample size, the more accurate the research findings. If a sample is too small, it will not provide a fair picture of the whole population.
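As a rough illustration of why larger samples tend to be more accurate, the sketch below draws samples of increasing size from an invented population of test scores and compares each sample mean to the population mean; all figures are simulated assumptions, not real data.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical population of 100,000 test scores (invented for illustration).
population = rng.normal(loc=65, scale=12, size=100_000)
true_mean = population.mean()

# Draw samples of increasing size and compare each sample mean to the
# population mean: larger samples tend to land closer to the true value.
for n in [10, 100, 1_000, 10_000]:
    sample = rng.choice(population, size=n, replace=False)
    error = abs(sample.mean() - true_mean)
    print(f"n = {n:>6}: sample mean = {sample.mean():.1f}, error = {error:.2f}")
```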

Selection bias is when the sample in a study does not represent the general population. Selection bias can occur in two ways: 1. when individuals selected in a research study have characteristics that make them different to the general population; or 2. when individuals opt into a research study and have characteristics different to the general population. Selection bias can affect the outcome of a study, as it is possible that any effect detected by the research is due to the specific characteristics of the sample, rather than the approach itself.
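The sketch below illustrates the second scenario with invented attendance data: an opt-in sample drawn only from highly engaged students produces a misleading estimate of the population average, whereas a random sample does not. All values are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Hypothetical population: attendance rates for 10,000 students (invented).
attendance = rng.beta(a=8, b=2, size=10_000)  # mostly high, some low

# A random sample represents the population reasonably well.
random_sample = rng.choice(attendance, size=200, replace=False)

# A self-selected sample, e.g. only students who volunteer to respond,
# may over-represent highly engaged (high-attendance) students.
volunteers = attendance[attendance > 0.85]
opt_in_sample = rng.choice(volunteers, size=200, replace=False)

print(f"Population mean attendance: {attendance.mean():.2f}")
print(f"Random sample mean:         {random_sample.mean():.2f}")
print(f"Opt-in sample mean:         {opt_in_sample.mean():.2f}")
```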

Seminal research is a term used to describe studies that are recognised within a particular discipline as presenting an idea of significant and enduring importance or influence.

A data analysis result is statistically significant when it is unlikely to have occurred by chance alone. Researchers often use statistical significance to describe how confident they are in their results.
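As a simple illustration, the sketch below applies an independent-samples t-test to two invented sets of scores; the p-value indicates how likely a difference at least this large would be if there were really no underlying difference, and the 0.05 threshold shown is a common convention rather than a rule.

```python
import numpy as np
from scipy import stats

# Hypothetical post-test scores for two invented groups of students.
group_a = np.array([62, 70, 68, 74, 65, 71, 69, 73, 66, 72])
group_b = np.array([60, 64, 61, 66, 59, 63, 65, 62, 58, 64])

# An independent-samples t-test estimates how likely a difference in means
# at least this large would be if there were really no difference.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# A common (but arbitrary) convention treats p < 0.05 as
# 'statistically significant'.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("Statistically significant at the 0.05 level:", p_value < 0.05)
```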

A systematic review is an evidence-based (or ‘objective’) approach to a literature review. Systematic reviews answer a precise, clearly defined question to produce evidence to underpin a piece of research. A systematic review must explicitly outline: the methods for data collection, the methods for data extraction, the number of papers included in the review, and the methods for data analysis.

Validation is the process of determining whether the way you are measuring something is appropriate given the research aims and conclusions of the study. There are many considerations when determining whether the way you measure is ‘appropriate’. These include but are not limited to:

  • whether the way you measure is reliable (for example, will different researchers score a teacher in the same way when using this observation framework?)
  • whether it provides data that accurately represents the outcome (for example, is a student’s score on this twenty-question reading comprehension test an accurate reflection of their reading ability?)
  • whether the way you measure should be used given the consequences (for example, should we rely on this data when deciding whether to ask a student to repeat a year?).