
NSVRC was honored to have Dr. Sarah DeGue speak at our RPE Evaluator Community of Practice about lessons learned in evaluating prevention programs. This blog post provides a recap of the highlights from that discussion and is reprinted with permission from the author. You can view the original blog here and learn more about the work of Violence Prevention Solutions and Dr. DeGue. Find additional tools to support your evaluation efforts at www.nsvrc.org/prevention/evaluation-toolkit.
How collecting data along the way gets you to your destination—less violence—faster.
A few weeks ago, I shared lessons from my CDC years with an evaluator group at the National Sexual Violence Resource Center. Here, I want to share those lessons with you, the people doing prevention in the real world—where staff wear five hats, calendars are full, and “do more with less” feels like the job description.
This work is hard, now more than ever. We need tools that make our work easier and more effective, so the effort you’re already pouring in turns into the outcomes our communities deserve. Program evaluation is one of those tools.
Why program evaluation (especially now)
When resources are tight, it’s tempting to skip evaluation and “just do the work.” The problem: without a feedback loop, we can’t tell if we’re drifting off course—or if we’re quietly getting wins worth doubling down on. Evaluation helps us steer in motion. It protects people and dollars, and it helps us tell a clear story about progress, even when the steps are small.
Think of evaluation as a supportive navigator, not a critic in the back seat. It’s there to help you land the plane/park the car/dock the boat—pick your metaphor!—safely.
What I’ve learned about program evaluation
1) – Not all programs are created equal
Many prevention activities are well-intended, but not all are effective. Assemblies and awareness events, on their own, rarely move behavior and often have short-lived effects on attitudes. They can still play a role—as part of a package that includes strategies with demonstrated impact—but they shouldn’t be the whole plan.
To invest wisely, we need two things:
- Outcome evaluation = does this approach reduce risk or violence in general? (Did we reach the destination?)
- Program evaluation = is our version, with our people, in this setting, working as intended right now? (Are we tuned up, fueled, and on the right route—and do we need snow tires?)
You need both to get safely to the real destination: less violence.
2) – Process data is powerful
We don’t have to wait a year to learn something isn’t landing. Tracking a few basics—fidelity (were core pieces delivered?), reach (who actually got them?), and engagement (did people attend and participate?)—catches fixable issues early.
- If classrooms that miss even a few critical lessons have weaker outcomes, that’s not a program failure—it’s a coverage or scheduling problem you can solve.
- If implementation logs show facilitators struggling with one module, a mid-year tune-up can boost confidence and participant engagement in the second half.
The point: understand what’s working (and what’s not) along the way so you can course-correct.
3) – Context is everything
Program evaluation isn’t just about whether something works—it’s about how and why it works here. Many great programs still need local tailoring. For example, an organization serving Indigenous youth might host listening sessions with youth, elders, and community leaders before adapting an evidence-based curriculum. When the final version feels owned by the community, participation and buy-in increase. When it reflects the local community’s needs and values, it works better.
Adapt delivery to fit your context while protecting the core components that make the program effective. That’s not drift—that’s good practice.
4) – Build it in; don’t bolt it on
When evaluation is treated as an add-on or funder requirement, we lose a lot of learning. Make it part of the program’s DNA from the start. When it’s baked into planning and routines—short check-ins, a simple dashboard, a few clear questions—it becomes how the team learns together.
Design your evaluation plan before launch so partners can help choose measures that matter, not just boxes to check. The result: ownership, real-time feedback loops, and shared accountability.
Pro tip: if a survey or metric won’t help you decide a next step, drop it or swap it.
5) – Prevention is a long game—count the stepping stones
Culture shifts slowly. In the meantime, track near-term signals that add up: policy changes, skill gains, safer environments, better implementation, and greater reach. Celebrate those wins. They keep partners engaged and staff hopeful while bigger outcomes accumulate. You’re building the foundation for behavior change—even if it isn’t immediate.
The bottom line for leaders (and readers)
Strong programs need strong learning loops. When evaluation time and tools aren’t funded, practitioners are asked to navigate without instruments. Leaders should budget for basic process and outcome data collection (including at least one measure of behavioral or environmental change) and for dedicated time to review and act on results. That’s not fluff—it’s how we create impact and equity.
