
The Australian Education Research Organisation’s (AERO) national snapshot of evidence use and evidence-based practice in schools is based on findings from the AERO evidence use survey, a rapid review of existing literature on evidence use, early findings from interviews with school teachers and leaders, and analysis of existing education surveys.

Download methodology (PDF, 760KB)

AERO evidence use survey

The schools snapshot presents findings from the 932 Australian school teachers and leaders who responded to AERO’s survey. The survey had 53 quantitative questions about practitioners’ confidence and attitudes towards evidence, their use of evidence and evidence-based practice, and their school’s workplace supports and culture regarding evidence use. The survey was informed by questions asked in similar surveys in Australia, the UK and the USA.

Data was collected between February and December 2021. The survey respondents were broadly representative of Government, Catholic and Independent schools and the population of each state and territory. Having a representative sample means that while the sample size is only a fraction of the total number of school practitioners currently working in Australia, our findings are generalisable to this broader population.

Respondents taught across different school levels: 50% primary, 30% secondary, 12% combined primary and secondary and 4% specialist. They had varying levels of experience, ranging from less than a year to 50 years. Most (69%) were classroom teachers without formal leadership roles, 17% were mid-level or school leaders, 4% held teacher aide or similar roles, and <1% were pre-service teachers. The remaining 10% of respondents either held other school roles (for example, school counsellor, librarian or administrator) or did not provide information on their school role.

The national snapshot presents the distribution of responses across questions, and emphasises where differences between teachers and leaders are statistically significant. Where applicable, chi-square tests were conducted and p-values were calculated. Correlation analyses were conducted to identify potential relationships between confidence to conduct different types of activities (for example, between confidence to assess the quality and rigour of academic research and confidence to apply findings from academic research to change practice).

Rapid review of literature 

Literature used in the national evidence use snapshot was identified through a rapid review process conducted by AERO.

Steps in the rapid review process

The research question answered by this rapid review was:

What is the current state of evidence use and evidence-based practice in schools in Australia?

For this review:

  • “evidence use” is defined as use of research evidence and teacher-generated evidence
  • “evidence-based practices” are defined as 4 of the ‘Tried and Tested’ practices that AERO has identified for school teachers: explicit instruction, formative assessment, mastery learning and classroom management/focused classrooms
  • “current state” is defined as 2017 to 2022, although evidence of use from the preceding 5 years (2012 to 2017) was also collected, extending coverage to the last 10 years.

The following themes were of interest:

  • perceptions about evidence use
  • confidence/skills in using evidence
  • the practice of evidence use and evidence-based practices, including:
    • who uses evidence
    • what types of evidence are used (including how different types of evidence are used together)
    • the purposes for which evidence is used
    • how often evidence is used
    • what types of evidence-based practice are used (and how these compare with selected international benchmarks)
    • processes around how evidence is used
    • enablers and barriers to evidence use, including at the level of the system, school, individual practitioner, processes and the evidence itself
    • gaps in the existing research on evidence use (including gaps identified by practitioners).

Inclusion criteria

Inclusion and exclusion criteria by population, activity, setting, study design, publication details and outcomes are listed in Table 1.

Table 1: Eligibility criteria 

Where and how did we source the studies? 

Database searches were carried out by AERO. A total of 2,435 studies were retrieved: 2,218 identified through these databases, 19 from grey literature, and the remaining 198 publications already known to AERO. Table 2 and Table 3 present the databases consulted for the review of evidence use and use of evidence-based practices respectively. The databases were selected based on their relevance to the Australian context and availability.

Search results were collated and converted into a standard Excel format. Screening was carried out by 2 AERO researchers, with queries about specific articles decided by the project team.

Table 2: Databases – School evidence use search (search date: March 2022)

Table 3: Databases – School evidence-based practices search (search date: March 2022)

From this initial search, based on the selection criteria, 23 studies were included in the snapshot of evidence use and 40 in the snapshot of evidence‑based practices.

Secondary data analyses


Secondary data analyses drew on Australian data from 4 international education surveys to explore use of evidence-based practices in Australia. These surveys are:

  • Progress in International Reading Literacy Study (PIRLS)
  • Programme for International Student Assessment (PISA)
  • Teaching and Learning International Survey (TALIS)
  • Trends in International Mathematics and Science Study (TIMSS).

Data was derived from questionnaires provided to either students or teachers across the 4 surveys, resulting in a total of 9 datasets. It is important to note that each questionnaire was:

  • designed to gather data for a particular purpose
  • targeted at a specific cohort of students or teachers
  • implemented at a particular time.

These international surveys were not designed to gather data specifically for this snapshot. Only survey items that aligned most closely with each of the 4 evidence-based practices described above were included. The data presented from international surveys therefore provides an indication of use of these practices, but cannot provide a comprehensive picture, and results may be contextual.

These considerations mean that drawing broad conclusions about all teachers from the reported data would be inaccurate; instead, the data should be interpreted according to the responding student and teacher grade level and domain.

Students responding to the questionnaires were in grade 4 (PIRLS and TIMSS), grade 8 (TIMSS) or were approximately 15 years old (PISA). For ease of reference, we have indicated grade 10 for PISA student responses, but in fact students who participate in PISA come from a variety of grade levels.

Items targeted at students of different ages are worded in language appropriate for that age level. Nevertheless, caution needs to be taken in interpreting data from students – particularly those in grade 4 – since poor reading proficiency can impede their ability to provide a response that is truly reflective of their experience.

Specific cohorts of teachers are asked to respond to each questionnaire. PIRLS teachers are those teaching reading at grade 4 (in primary school). TIMSS teachers are those teaching mathematics or science at grade 4 (in primary school) or at grade 8 (in high school). PISA teachers are drawn from across schools and are those likely to be teaching students of the target age group of 15-year‑olds.

In PIRLS, PISA and TIMSS, teacher questionnaire data is collected to help provide a context for the environments in which students are learning. Hence its main focus is on helping to explain student performance in the PIRLS, PISA and TIMSS cognitive assessments.

TALIS is the only study that specifically targets teachers. It asks teachers to report on topics such as professional development, teaching beliefs and practices, and recognition and feedback. TALIS questionnaires are predominantly targeted at teachers in lower secondary education across a range of domains.

Survey respondents

Note: the figures below are average numbers of respondents across the questions analysed in this study, as not all teachers and principals answered every question.

PISA 2018 student and 2015 teacher questionnaires

Of the 600,000 students worldwide who participated in PISA 2018, 14,273 were Australian students and we have included data from their contextual questionnaires in this project. A questionnaire is also available for teachers, but this is optional. Australia did not deploy the teachers’ questionnaire in 2018 but did in 2015 (and is doing so again in 2022). The data in this report is therefore from the 2015 cycle, in which 95,000 teachers worldwide participated. Of these, 8,989 were from Australia.

TALIS 2018 teacher questionnaire

TALIS 2018 included participants in 48 countries, with questionnaires responded to by more than 260,000 teachers and 15,000 school principals. We included in this snapshot data gathered from the 15,800 teachers and principals in Australia who participated in TALIS 2018.

PIRLS 2016 student and teacher questionnaires

PIRLS 2016 data was collected from 320,000 students and 16,000 teachers in 61 countries. In Australia, 6,341 students and 1,037 teachers participated in PIRLS 2016.

TIMSS 2019 student and teacher questionnaires

TIMSS 2019 data was collected from 330,000 students and 22,000 teachers in 64 countries. In Australia, 14,950 students and 900 teachers participated in TIMSS 2019.


Data analysis for this snapshot was conducted by the Australian Council for Educational Research (ACER). Both AERO and ACER reviewed the items in each dataset and identified those relevant to the 4 evidence‑based practices in this project. Some of these were individual items, while others comprised scales. Items in scales can be analysed separately, but because they are designed to be combined, the insights derived from them are more powerful when the scale is reported as a whole.
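The idea of reporting a scale as a whole can be sketched as a simple mean of item responses. This is an illustration under stated assumptions: the item names and responses are hypothetical, and the actual international studies use more sophisticated scaling methods than a plain mean.

```python
# Sketch of combining scale items into one scale score per respondent.
# Item names and responses are hypothetical, not from any AERO dataset.
respondents = [
    {"explicit_1": 3, "explicit_2": 4, "explicit_3": 4},
    {"explicit_1": 2, "explicit_2": 2, "explicit_3": 3},
    {"explicit_1": 4, "explicit_2": 4, "explicit_3": 4},
]


def scale_score(items: dict) -> float:
    """Simple scale score: the mean of a respondent's item responses."""
    return sum(items.values()) / len(items)


scores = [scale_score(r) for r in respondents]
print(scores)
```

Reporting the combined score rather than each item separately smooths out item-level noise, which is why scale-level reporting is generally more informative.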

Table 4 illustrates the number of items per dataset, per theme. In addition, several items related to teacher or school characteristics (rather than pedagogy) were identified; these are shown in the right-hand column.

Table 4: Summary of relevant items from each dataset

T = Teacher, S = Student. *PISA targets students aged 15.5 to 16.5, who may be in a number of grades; grade 10 is used here as an average and should not be taken to indicate that all responses are from students (or their teachers) in grade 10.


Interviews with teachers and leaders

A total of 186 schools were randomly selected using stratified sampling, representing schools across sectors throughout Australia. The principal of each school was sent a recruitment invitation and asked if up to 3 staff would like to take part. Of the 6 schools that agreed to take part, 12 education practitioners consented to participate in a semi-structured interview. Interviewees included teachers and educational leaders and came from a range of school types (Catholic, independent and government; primary and secondary), geographical locations (major city, regional and remote) and socioeconomic areas (most disadvantaged to least disadvantaged).

In the semi‑structured interviews, teachers and leaders talked about the meaning of evidence-based practice and teacher-generated evidence, their use of research evidence (including finding and assessing evidence), their trust in research evidence and teacher-generated evidence, and their experience of enablers of and barriers to evidence use. Interviews were thematically coded and analysed. Findings provide insights into education practitioners’ perspectives and understandings of using evidence to inform their practice.
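Stratified random selection of schools, as described above, can be sketched as follows. The sampling frame, strata (sector by location) and per-stratum counts here are hypothetical and do not reflect AERO’s actual sampling frame.

```python
# Sketch of stratified random school sampling with a hypothetical frame.
import random

# Hypothetical frame: school id -> (sector, location) stratum key.
strata_keys = [
    (sector, location)
    for sector in ("government", "catholic", "independent")
    for location in ("major city", "regional", "remote")
]
frame = {f"school_{i}": key for i, key in enumerate(strata_keys * 40)}


def stratified_sample(frame, per_stratum, seed=0):
    """Draw per_stratum schools at random from each stratum."""
    rng = random.Random(seed)
    strata = {}
    for school, key in frame.items():
        strata.setdefault(key, []).append(school)
    sample = []
    for key, schools in sorted(strata.items()):
        sample.extend(rng.sample(schools, min(per_stratum, len(schools))))
    return sample


selected = stratified_sample(frame, per_stratum=2)
print(len(selected))  # 2 schools from each of the 9 strata
```

Sampling within each stratum, rather than from the frame as a whole, guarantees that every sector-by-location combination is represented in the selected schools.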

Download the full methodology PDF (760KB) for Appendix A: Search terms for the rapid review of literature
