Curriculum & Leadership Journal
An electronic journal for leaders in education
ISSN: 1448-0743

The dilemma of data: using accountability data for school improvement


Over the past decade or so, as accountability has become the watchword of education, school leaders have increasingly had to make decisions with confidence and authority. In the past, many of the decisions made in schools were based on leaders' own professional judgment and knowledge. In the 21st-century educational culture, leaders are turning to data to provide context and evidence as they try to make objective decisions that will stand up to community and government scrutiny.

In announcing an increase in federal funding to schools on 11 March 2004, the Federal Minister for Education, Dr Brendan Nelson, tied funding to a series of performance measures based on data that schools must make publicly available. These measures include:

  • academic outcomes and improvements on the previous year
  • the vocational education and training (VET) options offered to students
  • school leaver destinations
  • the professional qualifications held and professional development undertaken by teachers
  • absentee rates
  • performance against Years 3, 5 and 7 literacy and numeracy benchmarks.

In addition, schools must commit to common outcomes testing in the key areas of Mathematics, Science, English, and Civics and Citizenship, and to expanding performance targets to include scientific literacy, ICT literacy, VET in schools, and Civics and Citizenship.

The overseas experience has been that data is being used to ensure accountability, and there is nothing to suggest that the same will not occur in Australia. Schools in Australia, however, may be able to learn from overseas successes and failures to ensure that data in this country is used positively to improve student performance.

Earl and Fullan (2003), in their article 'Using data in leadership for learning', look at three examples of data being used for school reform:

  • The National Literacy and Numeracy Strategies in England
  • The Manitoba School Improvement Program in Manitoba, Canada
  • Secondary School Reform in Ontario, Canada.

In England, the National Literacy Strategy and the National Numeracy Strategy (NLS and NNS) were introduced in 1998 and 1999 respectively to improve classroom practice and student learning in literacy and numeracy. National targets were set to increase the percentage of 11-year-olds reaching the 'expected level' (Level 4) in annual national assessments of literacy and numeracy, and OFSTED inspections focused on literacy and numeracy teaching.

Although the Strategies are not statutory, all primary schools in England receive these materials and the majority of schools consider them a high priority. Primary schools implementing the Strategies set targets for their progress on the Key Stage assessments, and these targets are reported publicly. Schools also receive sophisticated data that includes the individual school's performance and assessment report, national summaries, value-added information and national benchmark information.

The Manitoba School Improvement Program (MSIP) involves a relatively small number of non-profit, non-government schools. Schools that receive funding under this program are engaged in school-based improvement at secondary level, with an emphasis on the needs of students at risk. In order to receive funding, schools must commit to producing an annual evaluation report based on school-developed data. No data is reported publicly.

In 1997, Secondary School Reform (SSR) was introduced in Ontario, Canada, as part of the Education Quality Improvement Act. The goals of SSR were to improve accountability and effectiveness within the school system, and it was introduced alongside other wide-ranging reforms. These included: secondary schooling reduced from five years to four; a new and more challenging curriculum; two differentiated levels of courses; mandatory community service; specified school subject and skill graduation outcomes; prior learning assessment; common report cards; and a mandatory literacy test. In addition, school advisory councils were established, the amount of instructional time in a teacher's day was mandated, and an average class size was set (Earl and Fullan, 2003).

Large-scale assessment of reading, writing and mathematics began in 1997 in Years 3 and 6, with an arm's-length agency of government, the Education Quality and Accountability Office (EQAO), established to collect, evaluate and report information about educational quality. Later, EQAO introduced a mathematics assessment in Year 9 and a reading and writing test in Year 10. The Year 10 test is a high-stakes graduation requirement for students. Schools receive summary results of student performance in all of these tests and are required to report them publicly to the community. Results are also published in local newspapers.

Analysis of how the data gathered in the above programs is used shows both positive and negative impacts on schools. Schools in all three jurisdictions reported initial discomfort with using the data. Many school leaders reported that they did not have confidence in either interpreting or using data. Where the type of data gathered was decided by the school (Manitoba), leaders and teachers expressed greater confidence than in those situations where data was derived from external testing. Leaders in Manitoba reported such findings as:

We are collecting data (on the number of students who dropped out during the semester and the final ones who were successful in completing the credit)... A lot of teachers never thought of this at all.

They have no idea that 33% of the students are unsuccessful, more if you consider the drop-outs. In some classes over 50% of these kids either dropped out or didn't pass the course. The teachers are just blown away. (ibid)

In England and Ontario, school leaders were sceptical about the value of data used for 'surveillance' by government, particularly when it was used to 'name and blame' schools. In Ontario, some principals resented the extra work and time involved in administering mandatory tests. There was also concern in both places that once data was made public, schools lost control of the educational agenda to a certain extent. This was particularly so where the data revealed problems.

When schools received data at the same time as the information was publicly released, the problem became more acute. For these schools, 'transparency meant vulnerability' as school leaders tried to make sense of the data at the same time as they were trying to report it in a reasonable way (Earl and Fullan, 2003).

Principals were aware that data showing students had not met literacy or numeracy standards necessitated remedial action, but they did not always know how to deal with the problems highlighted, did not have enough resources, and had to cope with teachers who resented the erosion of their professionalism. They also had the problem of trying to explain test results to parents.

In Manitoba, where data was internal and the principal had control over what data to release and when to release it, the above problems did not occur. Despite the difficulties, all jurisdictions reported positive outcomes arising from data collection. In England, school leaders reported increasingly sophisticated use of data once they moved from the numerical targets required by government to curriculum targets based on the frameworks for literacy and numeracy.

Schools were able to identify student strengths and weaknesses revealed by the data and put measures in place to advance teaching and learning. This was particularly successful where support was provided to teachers. Manitoba principals saw data as useful in setting direction and initiatives for their schools. They used data throughout the year to derive the following year's initiatives, and ongoing inquiry and reflection are now part of the culture (ibid).

Brown and Duguid (2000) discuss the importance of capturing knowledge 'without killing it'. Data alone is of little value if nothing is done with the information; it is only when information derived from assessment is shared and debated in a social context that it becomes useful.

Fullan (2000) says that teachers and leaders in schools need to 'examine data, make critical sense of it, develop action plans based on the data, take action and monitor progress along the way'.

School leaders need to ask of their teachers: What do we expect children to achieve at our school? How will we know they have achieved it? They then need to make decisions about the data that demonstrates what is expected and the level at which that expectation is achieved.

Queensland schools are already rich with data from the Year 2 Net, the Years 3, 5 and 7 benchmark testing, the Queensland Core Skills Test, the National Maths, English and Science Competitions, in-school testing and so on.

This data can be analysed in a variety of ways (see the sketch after this list):

  • Snapshot (of one measure): How have our Year 7s performed on the 2003 Literacy test?
  • Over time (longitudinal): Have the characteristics of the student population changed over time? How did our Year 7 cohort perform on the Year 3 and Year 5 tests?
  • Interaction of measures: Is there a difference in how males and females perform? Is there a gender performance difference on particular items? Do students with good school attendance achieve more highly on state-wide tests of literacy and numeracy?
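
These three modes of analysis can be illustrated with a short, hypothetical sketch in Python using pandas. All of the column names and scores below are invented for the example; real benchmark data would arrive in system-specific formats:

    import pandas as pd

    # Invented cohort data: one row per student, with Year 3/5/7 test scores.
    results = pd.DataFrame({
        "student_id":  [1, 2, 3, 4, 5, 6],
        "gender":      ["M", "F", "F", "M", "F", "M"],
        "attendance":  [0.95, 0.88, 0.99, 0.76, 0.91, 0.83],  # share of days present
        "year3_score": [410, 455, 470, 380, 440, 395],
        "year5_score": [480, 520, 545, 430, 510, 450],
        "year7_score": [560, 600, 615, 490, 585, 520],
    })

    # Snapshot: how did our Year 7s perform on the latest test?
    print("Year 7 mean:", results["year7_score"].mean())

    # Over time (longitudinal): the same cohort across the Year 3, 5 and 7 tests.
    print(results[["year3_score", "year5_score", "year7_score"]].mean())

    # Interaction of measures: gender differences, and attendance vs performance.
    print(results.groupby("gender")["year7_score"].mean())
    print("Attendance-score correlation:",
          results["attendance"].corr(results["year7_score"]))
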
Schools also collect a variety of data that measures wider aspects of school life. This is generally in-school data, some of which may be qualitative and collected in surveys of the school community. Other quantitative data aside from academic results is also available to measure school performance.

This includes:

  • bullying and disciplinary statistics
  • workplace health and safety incidents
  • student subject choices
  • proportion of budget spent on learning areas
  • percentage of staff time spent on face-to-face teaching
  • level of school resourcing
  • retention rates
  • percentage of students with learning difficulties; and so on.

Schools can use such data to ascertain where student safety is most at risk, make decisions about curriculum offerings, develop budgets, and decide how to use resources most efficiently. Without data, these decisions are likely to be based on perception rather than fact, and are more likely to be 'hit and miss'.
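
As a concrete illustration of the first of these uses, a school might rank locations by reported bullying incidents to see where supervision is most needed. The following sketch, again in Python with pandas, uses an invented incident log; the location names and counts are purely illustrative:

    import pandas as pd

    # Invented incident log: one row per reported bullying incident.
    incidents = pd.DataFrame({
        "location": ["oval", "corridor B", "oval", "canteen",
                     "corridor B", "oval", "bus stop"],
        "term_week": [1, 1, 2, 2, 3, 3, 3],
    })

    # Count incidents per location, most frequent first, to show where
    # student safety is most at risk.
    by_location = incidents.groupby("location").size().sort_values(ascending=False)
    print(by_location)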

A final interesting finding in the research about the use of data is that parents generally welcome reporting based on test data (Figgis and Wildy, 2003). While there are provisos around this bald statement, the fact appears to be that parents want to know 'the truth' about their child's progress and to work with schools to solve problems.

There are many problems with data, particularly where school personnel lack the skill, knowledge and understanding to use it, or where the data collected is not of high quality or accuracy. However, where schools are using data well, the research suggests valuable and useful outcomes for students, teachers and parents.


This article originally appeared in AISQ Briefings May 2004.



Bibliography:

Brown, J. & Duguid, P. (2000), 'Balancing act: how to capture knowledge without killing it', Harvard Business Review, 78(3).
Earl, L. & Fullan, M. (2003), 'Using data in leadership for learning', Cambridge Journal of Education, 33(3).
Figgis, J. & Wildy, H. (2003), 'Adding value to numbers: what parents make of national literacy and numeracy testing', National Roundtable Conference, Murdoch University.
Fullan, M. (2000), 'The return of large-scale reform', Journal of Educational Change, 1(1).
Nelson, B. (2004), 'Learning Together through Choice & Opportunity in Queensland', media release, 11 March.
