While technology, such as 1:1 laptop programs, may help raise test scores, its greater value lies in helping students build core competencies: problem-solving, critical thinking and creativity.
Each year school districts spend considerable money, time and effort purchasing and implementing technology in their schools. Often this is done with the promise and hope that the technology will not only improve student learning and engagement, but also raise students’ scores on state tests. To date, however, there’s been little evidence that educational technology tools and resources actually raise students’ scores on these high-stakes exams. Regardless, some educators continue to pursue the test score holy grail, while ed tech companies offer assurances, based on slim research, that they have the solutions.
All of this raises some key questions: Are schools being duped by ed tech vendors? Are districts leading their taxpayers astray with promises that technology will raise students’ scores? If implemented by well-trained teachers, can ed tech tools improve students’ scores, but is it too early in the transition process to know for sure? Are the skills assessed on state tests incongruent with those developed through ed tech? Or is “can ed tech improve students’ test scores?” the wrong question altogether?
The answers to these questions aren’t mutually exclusive, but I subscribe to the belief that “improving students’ state test scores” is the wrong problem for schools to be addressing when they consider how ed tech can help advance student learning.
If used with fidelity over time, certain ed tech tools may well improve student test scores, and a meta-analysis of research, primarily focused on 1:1 laptop programs, shows positive results. But this research summary also tellingly notes that the students involved in the various studies engaged more deeply in a variety of non-tested skills, such as authentic research and writing processes, problem-solving opportunities, and communication.
The challenges school districts face in linking their technology programs to student test score gains are exemplified by the Mooresville, N.C., school district. In 2011, Mooresville was widely lauded after its 1:1 laptop program produced significant student gains on state tests for reading, math and science. School districts throughout the country lined up to learn from and replicate Mooresville’s program. But a report from a recently completed five-year study of Mooresville’s 1:1 laptop implementation shows the student gains have leveled off considerably over time. The report also casts some doubt on the initial data used to trumpet Mooresville’s early success.
So, if students’ performances on state tests are the wrong measures for determining the success of schools’ ed tech initiatives, how should ed tech be evaluated?
First, it’s important to understand that achieving success with ed tech is an ongoing process that, like learning, doesn’t have an end point — it’s a matter of continuous improvement and growth over time. On the infrastructure side, classroom technologies will become outdated and need to be replaced. Networks will require regular upgrades. Personnel in all capacities — district office staff, administrators, and especially teachers — must have thorough, ongoing support as they transition to an increasingly digital culture. All of these advances should be evaluated for their impact on rigorous and personalized student learning.
There are evaluation processes schools can use to measure how particular ed tech applications affect students’ skill development. But the primary areas where educators should focus their attention in evaluating students’ gains are the important skills not assessed on state tests: problem-solving and critical thinking, collaboration, communication, creativity, and self-directed learning. If ed tech tools are helping students become proficient in these new core competencies, then any additional improvements in student test scores will be a bonus.