In a new report by the nonprofit Digital Promise, researchers suggested that outcomes-based contracting (OBC) — a burgeoning procurement model where payment is predicated on how a tool is used and, in some cases, the results it produces — may be able to reframe the efficacy problem. While often marketed to districts as a money-back guarantee, OBC can instead be viewed as a way to ensure mutual accountability: it shifts a school district's focus from passive purchasing to intentional implementation while requiring vendors to stay engaged after the sale. And while OBC is often framed around student outcomes, the report focuses more on whether tools are used effectively, suggesting the model may be less about guaranteeing learning gains and more about pushing districts to put stronger implementation practices in place.
Moreover, researchers said that by tying financial stakes to specific goals, outcomes-based contracting forces districts to define exactly who a tool is for and how it will be used, which, they argue, is necessary to make digital tools effectively testable in the first place.
THE SHARED RISK OF IMPLEMENTATION
One of the most significant shifts that comes with OBC, according to the report, is that vendors share in the risk if their products don’t deliver results. In traditional procurement, where vendor involvement often ends once a contract is signed, the risk sits almost entirely with the district after the payment clears.
“Having providers put their money where their mouth is and create the marketplace pressure to be honest about who it's working for and under what conditions is something that lifts the whole,” said Andrew Vollavanh, project manager at Digital Promise and co-author of the report.
Beyond shared risk, the report highlighted that the primary success of OBC may be the infrastructure it establishes. Sierra Noakes, director of ed-tech policy and standards at Digital Promise, a co-author of the report and a former special education teacher, noted that the model builds a “brand-new infrastructure where districts are looking to the data instead of universally procuring a tool for every kid.”
This intentionality appears to filter down to the classroom level, as the report said the OBC model encouraged districts to be explicit about why they were buying a tool, who it was for, and what success would look like. Noakes described anecdotal evidence of students — in this case, kindergarteners — who could articulate exactly why they were using a specific tool and what skills they were working to improve.
“There's been no other example I've seen of a procurement model where a 5-year-old is like, ‘I'm using this product because I'm working on these skills,’” Noakes said. “And as a former teacher, it was — I was in tears. I was like, this is everything I could ever want for kids.”
BARRIERS TO SCALABILITY
Despite the potential for clearer data and improved engagement, researchers emphasized that OBC’s administrative and logistical workload remains a significant hurdle for districts.
“I don't think there's a world where every ed-tech contract could be an OBC,” Noakes said, citing the sheer volume of products on the market and the difficulties of implementation. The report said those challenges are particularly steep in the first year, with one vendor saying they put in five to seven times as many staff hours on an OBC as on a standard contract.
Alison Shell, former principal researcher at Digital Promise and a co-author of the report, emphasized that the large amount of data and the specificity required by these contracts can be especially overwhelming for smaller or less-resourced systems.
“Is it feasible for every district to do this? Maybe not, because you have to have that capacity,” she said. “It's not just like, ‘Oh, we're going to do an OBC.’ It’s, ‘Who’s going to talk to the provider? Who’s going to look at the data? Who’s going to make sure the teachers know what they’re doing?’”
REFRAMING THE EFFICACY PROBLEM
The report ultimately argued that while OBC hasn't yet proven ed-tech efficacy, it makes efficacy testable: By forcing districts to define specific student populations and measurable goals, the model removes many of the variables that typically prevent clear evaluation.
The goal, according to Shell, is to filter out the noise of the market.
“It really does matter what the tool is,” she said. “How can we make sure that those are the ones that are being used, and then hopefully the other ones that are just kind of wasting time, you know, fly away maybe, or ... don’t get as much attention.”