An association is when there is a relationship between two elements, factors or events, but a causal link cannot be proved or explained. Associations can be positive (e.g. higher socioeconomic status is associated with higher student achievement) or negative (e.g. higher student absenteeism is associated with lower student achievement).
Causation is when one element, factor or event is known to cause another (e.g. a particular teaching practice is known to lead to improvements in student test scores). To prove causation between two things (let’s call them A and B), researchers need to show: 1. that there is an association between A and B; 2. that A happens before B; and 3. that B is not caused by a third thing (i.e. C or D). In education settings, proving causation is often challenging because of the many influences on teacher and student outcomes.
A comparison group is a group of people in a research study whose responses or outcomes function as a comparison against which the effect of the approach being tested can be measured. Comparison groups receive a different treatment to the group receiving the approach being tested. There can be any number of comparison groups in a study. A comparison group is called a ‘control group’ when it receives no treatment at all.
Context is the social, cultural and environmental factors found in research settings. Taking context into account in research studies is important because context can affect the outcomes of research (i.e. evidence generated in one context may not necessarily apply to a different context). Evidence is most relevant when it has been generated in a context similar to the context in which it will be applied. Examples of ‘context’ may include location, demographics of research participants, or the level of organisational support for the particular approach being researched.
Data is information that is collected and analysed in order to produce findings and/or to inform decision-making. Data can be qualitative (e.g. teacher observations or quotes from students) or quantitative (e.g. student test scores or attendance data).
An educational approach is effective if it causes (see causation above) a desired change in a particular outcome. This desired change can be an increase in an outcome (e.g. increases in student achievement) or it can be a decrease in an outcome (e.g. reduction in student absenteeism).
Empirical research presents observable data to substantiate its claims. This data may be primary data (i.e. observation and measurement of phenomena or events directly experienced by the researcher) or secondary data (data that has already been collected by other researchers).
Evaluation is the systematic and objective assessment of an approach. Evaluation provides evidence of what has been done well, what could be done better, the extent to which objectives have been achieved and/or the impact of the approach. This evidence can then be used to inform ongoing decision-making regarding the approach.
Evidence is any type of information that supports an assertion, hypothesis or claim. There are many types of evidence in education, including insights drawn from child or student assessments, classroom observations, recommendations from popular education books and findings from research studies and syntheses. AERO refers to two types of evidence in its work:
research evidence: This is academic research, such as causal research or synthesis research, which uses rigorous methods to provide insights into educational practice.
practitioner-generated evidence: This is evidence generated through practitioners in their daily practice (e.g. teacher observations, information gained from formative assessments or insights from student feedback on teacher practice).
Evidence-based practice is an educational approach that is supported by evidence. The approach has been the subject of academic research and there is a broad consensus within the research community that it works.
Evidence-informed practice is an educational approach that is applied using evidence from research together with a practitioner’s professional expertise and judgement. The expertise and judgement used by practitioners can be based on knowledge or understanding of their children and students, or the environment in which they work.
Experimental design is the process of planning an experiment that can establish a ‘cause and effect’ relationship (i.e. an experiment to determine the specific factors that influence an outcome). Experimental designs account for all other factors that could influence an outcome, so the cause of an effect can be isolated.
A history effect is the term for influences that occur at the same time as an approach is being evaluated and/or influences that occur between the approach being implemented and the outcomes being measured. For example, researchers may want to know the effect of a particular teacher’s writing program on student writing test scores. However, to do this they need to separate out the effects of any influences that occur simultaneously (e.g. other teachers using different writing strategies with these students) and/or those that occur in the period between the implementation of the writing program and the writing test (e.g. a whole-school writing celebration).
An approach that is applied to address a problem is sometimes referred to as an intervention; for example, a teacher may implement a certain early literacy intervention to support struggling readers. In a research study, an intervention is the approach that is being investigated, tested or evaluated. An intervention is sometimes called a ‘treatment’ in a research study.
A literature review identifies, evaluates and synthesises the relevant literature within a particular field of research. It usually discusses common and emerging approaches, notable patterns and trends, areas of conflict and controversies, and gaps within the relevant literature. Literature reviews do not usually explicitly state the methods used to identify, evaluate or synthesise the relevant literature.
Maturation effects are the effects in a setting where an approach is applied that occur naturally (i.e. that would have occurred anyway), as opposed to the effects that occur as a result of the approach. For example, researchers may want to know the effect of a particular educational program on student social and emotional skills. However, social and emotional skills develop over time as children mature, and so researchers need to distinguish between the effect of the educational program and the effects of natural development as a result of students getting older.
A meta-analysis uses statistical methods to combine and summarise the results of individual studies. It is designed to provide a single overall estimate of the effect of a particular approach and/or an estimate of how much more likely one approach is to work than another. It can be thought of as a quantitative counterpart to a literature review or systematic review.
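As an illustration only, the pooling step at the heart of many meta-analyses can be sketched as an inverse-variance weighted average, in which more precise studies carry more weight. The effect sizes and standard errors below are invented for the example, not drawn from real studies.

```python
# Minimal sketch: fixed-effect (inverse-variance) pooling of study results.
# Each pair is (effect size, standard error); the numbers are invented.
studies = [
    (0.30, 0.10),
    (0.15, 0.08),
    (0.45, 0.20),
]

# A study's weight is the inverse of its variance: precise studies count more.
weights = [1 / se**2 for _, se in studies]

# Pooled effect is the weighted average of the individual effect sizes.
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

Note how the pooled standard error is smaller than that of any single study, which is one reason a meta-analysis can give a more precise estimate than any individual study it draws on.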
An outcome measure is an observation that can be used to measure the effect of a particular approach. Outcome measures can be qualitative (e.g. quotes or observations) or quantitative (e.g. test scores). For example, when examining whether a particular approach helps students understand a concept, a teacher could set an assessment. The student assessment score could then be used as an outcome measure of student understanding.
Peer review is the assessment of research by others working in the same or a related field. The assessment is based on the expertise and experience of the researcher undertaking the review and should be impartial and independent.
A pilot study, pilot project or pilot experiment is a small-scale trial that is conducted in order to test the effects of an approach before implementing it on a larger scale. A pilot project can also help to determine feasibility, cost, adverse events and necessary improvements to the approach.
If a study shows that an approach leads to the desired outcome, it is said to have a positive effect. Conversely, if a study shows that an approach has the opposite of the desired outcome, it is said to have a negative effect.
Qualitative methods involve collecting and analysing non-numerical data, and may include observations, interviews, questionnaires, focus groups, and analysis of documents and artifacts. Qualitative methods can be used to understand concepts, opinions or experiences as well as to gather in-depth insights into a problem or generate new ideas.
Quantitative methods involve collecting and analysing numerical data. Quantitative methods are generally used to find patterns and averages, make predictions, test causal relationships and generalise results to wider populations.
A quasi-experimental design is a research methodology that aims to establish a ‘cause and effect’ relationship (i.e. to determine the specific factors that influence an outcome), but it cannot completely eliminate all other factors that could influence an outcome (i.e. other influences may still affect the findings).
A randomised controlled trial is a trial of a particular approach that is set up in such a way that allows researchers to test its effects. In a randomised controlled trial, subjects are randomly assigned to one of two groups: one receiving the approach that is being tested (the experimental group), and the other receiving an alternative approach or no approach (the comparison group or control). After the trial period, differences between the groups can be attributed to the approach being tested. Researchers and teachers who use randomisation must take into account ethical concerns, such as whether it is ethical to withhold treatment from subjects in the comparison group.
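The random assignment step described above can be sketched in a few lines. This is an illustration only: the participant identifiers are invented, and real trials would use more careful randomisation procedures (e.g. stratification).

```python
import random

# Minimal sketch of random assignment for a two-arm trial.
# Participant identifiers are invented for illustration.
participants = ["s01", "s02", "s03", "s04", "s05", "s06", "s07", "s08"]

rng = random.Random(42)  # fixed seed so the assignment is reproducible
shuffled = participants[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2
experimental = shuffled[:half]  # receives the approach being tested
control = shuffled[half:]       # receives an alternative or no approach

print("experimental group:", sorted(experimental))
print("control group:     ", sorted(control))
```

Because chance, not choice, decides who goes into which group, any pre-existing differences between participants tend to balance out across the two groups, which is what lets post-trial differences be attributed to the approach itself.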
Relevant evidence is evidence produced in contexts that are similar to one’s own context. Evidence can also be considered relevant when it is derived from a large number of studies conducted over a wide range of contexts.
Research is ‘the creation of new knowledge and/or the use of existing knowledge in a new and creative way so as to generate new concepts, methodologies, inventions and understandings’ (Australian Research Council, 2015). There are many types of research. For example:
exploratory research involves investigating an issue or problem. It aims to better understand this problem and sometimes leads to the formation of hypotheses or theories about the problem.
descriptive research describes a population, situation or event that is being studied. It focuses on developing knowledge about what exists and what is happening.
causal research (also known as ‘evaluative research’) uses experimentation to determine whether a cause-and-effect relationship exists between two or more elements, features or factors.
synthesis research combines, compares and links existing information to provide a summary and/or new insights or information about a given topic.
Research methods are the methods used to conduct research. Research methods are generally classified as ‘qualitative’ or ‘quantitative’ (see the entries above). When both methods are used, it is referred to as ‘mixed methods’ research.
Evidence is considered rigorous when it proves that a particular approach causes a particular outcome. Rigorous evidence is produced by using specialised research methods that can identify the impact of one particular influence. The most common research method used to produce rigorous evidence is the randomised controlled trial. However, there are many other methods that can produce rigorous evidence, whether qualitative, quantitative or mixed methods. What is important in producing rigorous evidence is that the research method can rule out the effects of as many other influences as possible.
A risk assessment is the process of identifying possible unintended effects or outcomes (i.e. risks), determining the likelihood that they will occur, identifying the potential consequences if they were to occur and then deciding how much it would matter if the unintended effects or outcomes were to occur.
A rubric is a set of criteria that can be used to make consistent judgements. In education settings, a rubric is usually used to help assess learning or development in a particular area. AERO’s Evidence rubric assists education practitioners and policymakers to evaluate the effectiveness of a new or existing policy, program or practice.
When studying a large population, it is often not possible to include every individual. Research studies usually include a certain number of individuals to represent the population. Those who are included in the study are referred to as a sample of the population. Sample size refers to the number of people in a sample. Generally, the larger the sample size, the more accurate the research findings. If a sample is too small, it will not provide a fair picture of the whole population.
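The link between sample size and accuracy can be demonstrated with a small simulation. The population below is simulated data (a hypothetical test-score distribution), used only to show that larger samples tend to land closer to the true population average.

```python
import random
import statistics

# Simulated population of 100,000 hypothetical test scores (mean ~50, SD ~10).
rng = random.Random(0)
population = [rng.gauss(50, 10) for _ in range(100_000)]
true_mean = statistics.fmean(population)

def average_sampling_error(sample_size, trials=200):
    """Average gap between a sample's mean and the true population mean."""
    errors = []
    for _ in range(trials):
        sample = rng.sample(population, sample_size)
        errors.append(abs(statistics.fmean(sample) - true_mean))
    return statistics.fmean(errors)

small_error = average_sampling_error(10)    # small sample
large_error = average_sampling_error(1000)  # large sample

print(f"n = 10:   average error {small_error:.2f}")
print(f"n = 1000: average error {large_error:.2f}")
```

Running this shows the large samples sitting much closer to the true mean on average, which is the intuition behind ‘the larger the sample size, the more accurate the research findings’.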
Selection bias is when the sample in a study does not represent the general population. Selection bias can occur in two ways: 1. when individuals selected in a research study have characteristics that make them different to the general population; or 2. when individuals opt into a research study and have characteristics different to the general population. Selection bias can affect the outcome of a study, as it is possible that any effect detected by the research is due to the specific characteristics of the sample, rather than the approach itself.
A systematic review is an evidence-based (or ‘objective’) approach to a literature review. Systematic reviews answer a precise, clearly defined question to produce evidence to underpin a piece of research. A systematic review must explicitly outline: the methods for data collection, the methods for data extraction, the number of papers included in the review, and the methods for data analysis.
Validation is the process of determining whether the way you are measuring something is appropriate given the research aims and conclusions of the study. There are many considerations when determining whether the way you measure is ‘appropriate’. These include but are not limited to:
whether the way you measure is reliable (e.g. Will different researchers score a teacher in the same way when using this observation framework?)
whether it provides data that accurately represents the outcome (e.g. Is a student’s score on this twenty-question reading comprehension test an accurate reflection of their reading ability?)
whether the way you measure should be used given the consequences (e.g. Should we rely on this data when deciding whether to ask a student to repeat a year?).