Why Funders Are Asking For More Outcome Data, and What to Do About It
A practical guide for executive directors and frontline workers navigating the shift from activity tracking to impact reporting
If your organization has noticed that grant applications now include more questions about outcomes, that reporting templates have grown longer, or that funders are asking what changed for the people you serve rather than how many people walked through the door, you are not imagining it. The expectations around how Canadian nonprofits and social service providers demonstrate their value are shifting, and they are shifting quickly.
This is not a passing trend. It reflects structural changes in how governments and philanthropic funders think about public investment. Understanding what is driving the shift, and responding to it practically, without losing focus on the work itself, is both possible and necessary.
What Is Actually Happening
Across Canada, funders at every level are moving from tracking outputs to requiring evidence of outcomes. An output describes what an organization did: the number of meals served, the workshops delivered, the people who accessed shelter on a given night. An outcome describes what changed as a result: whether food security improved over time, whether participants gained stable employment, or whether individuals moved from crisis into housing.
This distinction may seem semantic, but it carries real consequences for how programs are evaluated, how funding decisions are made, and how the sector communicates its value to the public. The federal government's Social Finance Fund, launched in recent years, now requires organizations receiving investment to report on target outcomes and indicators using the Common Impact Data Standard. Provincial funders in Ontario, Alberta, and British Columbia have introduced or expanded performance accountability frameworks that tie continued funding to demonstrated results. Corporate and philanthropic foundations increasingly ask not just whether programs were delivered, but whether they produced measurable change in the lives of participants.
At the same time, the broader regulatory environment is becoming more complex. New CRA reporting requirements are increasing the compliance burden for many nonprofit organizations, and both funders and the public are expecting clearer links between financial inputs and mission outcomes. These pressures are converging with donor fatigue, workforce constraints, and rising demand for services, creating a difficult operating environment in which the organizations doing the most critical work are also being asked to do the most rigorous reporting.
Why This Shift Is Happening
Three structural forces are driving the increased demand for outcome data, and none of them are likely to reverse.
Fiscal pressure is producing accountability pressure. Across all orders of government, public spending is under heightened scrutiny. When budgets are constrained, decision-makers look for evidence that investments are producing results, not simply maintaining existing levels of activity. This is not a judgment on the quality of service delivery. It is a consequence of how public finance operates under sustained fiscal constraint. The same dynamic applies to philanthropic funders experiencing donor fatigue and flat or declining revenue from individual giving.
The rise of social finance is embedding outcome logic into funding structures. Canada's Social Finance Fund, and the broader social innovation ecosystem it supports, is built on the premise that impact can be measured, compared, and used to inform investment decisions. This is a fundamentally different model from traditional grant-making, and it is expanding. Organizations that operate within or adjacent to social finance structures will encounter outcome reporting not as an optional enhancement but as a condition of participation.
Data infrastructure across the sector is improving, which raises the floor of what is considered reasonable to ask. As more organizations adopt digital tools for case management, intake, and service tracking, funders are recalibrating their expectations. Data that was once difficult to collect, such as longitudinal participant records, follow-up outcomes, and inter-agency referral tracking, is becoming feasible for a growing number of providers. In practice, the availability of new data tools means that funders now expect more, because more is technically possible.
Taken together, these forces indicate that the demand for outcome data is not a temporary reporting fad. It is a structural feature of the emerging funding landscape in Canada.
What Funders Are Really Asking For (and What They Are Not)
One of the most common sources of anxiety for service providers is the assumption that funders are asking for academic-quality evaluation research. In most cases, they are not.
What funders are asking for can be summarized in a few practical categories. They want organizations to articulate their intended change: a clear, specific description of what they expect to be different for the people they serve as a result of the program. They want a small number of meaningful indicators that track whether that change is occurring. They want consistent data collection over time, so that progress can be assessed against a baseline rather than reported as a one-time snapshot. And they want honest reporting that includes what is not working, not just what is.
What funders are generally not asking for is randomized controlled trials, external evaluation for every program, or complex statistical analysis. The gap between what funders actually expect and what providers fear they expect is usually wider than the gap between what providers already do and what they would need to do.
This is an important distinction. Many organizations already collect the raw ingredients of outcome data (intake assessments, case notes, follow-up check-ins, and exit surveys) without recognizing that, with modest restructuring, these practices could form the basis of a credible outcome reporting system.
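To make "modest restructuring" concrete, here is a minimal sketch in Python of how an intake assessment and an exit survey, two records many organizations already hold, can be paired into a single change measure. Every name in it (participant_id, food_security_score, the 1-to-5 scale) is a hypothetical placeholder to be replaced with whatever your own forms capture.

```python
# A minimal sketch, not a prescribed standard: pairing an intake
# assessment with an exit survey to produce one outcome measure.
# All field names and the 1-5 scale are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Assessment:
    participant_id: str
    stage: str                # "intake" or "exit"
    food_security_score: int  # hypothetical scale: 1 (severe) to 5 (secure)

def score_change(participant_id: str, assessments: list[Assessment]) -> int | None:
    """Exit score minus intake score; None if either record is missing."""
    records = {a.stage: a for a in assessments if a.participant_id == participant_id}
    if "intake" not in records or "exit" not in records:
        return None  # report the gap honestly rather than guessing
    return records["exit"].food_security_score - records["intake"].food_security_score
```

The point is not the code itself but the shape of the data: once entry and exit records share an identifier and a comparable score, the outcome is simple arithmetic.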
What This Means for Executive Directors
For organizational leaders, the shift toward outcome measurement carries implications for strategy, resource allocation, and funder relationships.
Treat outcome measurement as a management tool, not a compliance exercise. Organizations that approach outcome data purely as something funders require tend to experience it as a burden. Organizations that use the same data to inform program decisions, identify what is working, and communicate their story to boards and communities tend to experience it as an asset. The difference is largely one of orientation, though it does require investment in staff capacity and data systems.
Negotiate the terms of measurement with funders, rather than accepting them passively. Many funders are open to conversation about what constitutes a meaningful indicator for a given program. The Tamarack Institute's open letter to Canadian funders, signed by numerous sector organizations including the Common Approach to Impact Measurement, called explicitly for reporting practices that establish reciprocal data relationships with grantees, rather than top-down imposition of metrics. Where funder requirements feel disconnected from the realities of service delivery, there is often more room for dialogue than providers assume.
Invest in data literacy as organizational infrastructure. This does not mean hiring a data analyst, though for larger organizations that may be appropriate. It means building the capacity of program staff to understand why data is collected, how it connects to the organization's theory of change, and what the data reveals about program performance. When frontline workers understand the purpose of the data they collect, the quality of that data improves, and so does its usefulness for both internal learning and external reporting.
What This Means for Frontline Workers
For the people delivering services, the shift toward outcome data can feel like one more administrative demand layered onto already demanding work. That concern is legitimate. At the same time, frontline workers are often the people closest to the evidence of whether programs are producing change, and their perspective is essential to building measurement systems that reflect reality rather than bureaucratic abstraction.
The data you already collect may be more valuable than you think. Intake forms, case notes, referral records, and follow-up calls all contain information about whether participants' circumstances are changing. The challenge is not usually that this information does not exist. It is that it is not structured in a way that allows it to be aggregated and reported. A practical starting point, one that does not require significant new tasks, is to collaborate with program leadership to identify which existing data points connect to outcomes and which collection practices could be adjusted to capture that connection more clearly.
Outcome data is most credible when it reflects the nuance of service delivery. Funders are increasingly aware that not all outcomes are positive, that progress is rarely linear, and that context matters. Frontline workers who can articulate what conditions support positive outcomes, what barriers participants face, and what early indicators suggest a program is or is not on track provide a form of evidence that quantitative data alone cannot capture. This qualitative insight is not a substitute for measurement, but it is a necessary complement to it.
Your voice matters in defining what "success" looks like. When organizations develop their theories of change and select their outcome indicators, the perspective of frontline staff should inform those decisions. A program that defines success exclusively in terms that make sense to funders but not to the people delivering or receiving services is unlikely to produce data that is either accurate or useful.
Practical Steps for Getting Started
For organizations that have not yet developed a structured approach to outcome measurement, the following sequence is designed to be achievable without specialized expertise or significant new investment.
Start with your theory of change. Before selecting indicators or building data systems, articulate the logic that connects your program's activities to the outcomes you intend to produce. This does not need to be a formal document. It can begin as a conversation among staff: if our program works as intended, what will be different for participants? Working backward from that question clarifies what to measure and why.
Identify two to three outcome indicators per program. Resist the temptation to measure everything. A small number of well-chosen indicators, collected consistently, will produce more useful data than a large number of indicators collected inconsistently. The Common Approach to Impact Measurement, developed through a Canadian community of practice, offers a set of flexible standards designed to help organizations choose indicators that are meaningful to their own work while remaining compatible with funder reporting requirements.
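For teams that keep program documentation in spreadsheets or shared drives, even writing indicators down in one consistent structure helps. The sketch below shows one possible shape, with illustrative indicators for a hypothetical employment program; the indicators themselves are invented for this example, not drawn from the Common Approach's standards, which should be consulted directly.

```python
# A sketch of recording indicators in one consistent structure.
# The two indicators are illustrative inventions for a hypothetical
# employment program, not recommendations from the Common Approach.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    intended_change: str   # what should be different, in plain language
    collected_when: str    # the points in the workflow where it is captured
    data_source: str       # the existing form or record it comes from

employment_indicators = [
    Indicator(
        name="employed_at_six_months",
        intended_change="Participants hold stable employment after exit",
        collected_when="at exit and at a six-month follow-up call",
        data_source="case manager follow-up log",
    ),
    Indicator(
        name="self_rated_job_readiness",
        intended_change="Participants rate themselves as more job-ready (1-5)",
        collected_when="at intake and at exit",
        data_source="intake and exit surveys",
    ),
]
```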
Build measurement into existing workflows. The most sustainable data collection practices are those embedded in the work staff are already doing, rather than added as separate tasks. If intake workers already complete an assessment at entry, adding a follow-up assessment at a defined interval creates the basis for measuring change over time. If case managers already document service plans, structuring those plans around intended outcomes creates a natural link between service delivery and outcome tracking.
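As a sketch of what measuring change over time looks like once a follow-up assessment exists, the following takes rows as they might be exported from a case management system or spreadsheet, treats each participant's earliest score as the baseline, and reports the change at the latest follow-up. The dates, scores, and implied interval are invented for illustration.

```python
# A sketch of change over time once a follow-up assessment is added to an
# existing intake workflow. Rows mimic a case management export; the dates,
# scores, and roughly three-month interval are invented for illustration.

from datetime import date

rows = [
    ("p-001", date(2024, 1, 10), 2),  # intake
    ("p-001", date(2024, 4, 12), 4),  # follow-up about three months later
    ("p-002", date(2024, 2, 3), 3),   # intake only; follow-up still pending
]

def change_over_time(rows):
    """Earliest score per participant is the baseline; latest is the follow-up."""
    histories = {}
    for pid, when, score in rows:
        histories.setdefault(pid, []).append((when, score))
    changes = {}
    for pid, history in histories.items():
        history.sort()  # chronological order
        if len(history) < 2:
            changes[pid] = None  # no follow-up yet; surface the gap
        else:
            changes[pid] = history[-1][1] - history[0][1]
    return changes

print(change_over_time(rows))  # {'p-001': 2, 'p-002': None}
```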
Use what you learn. Data that is collected but never reviewed produces compliance without insight. Build regular opportunities, even brief ones, for staff to review outcome data together, discuss what the patterns suggest, and consider whether program adjustments are warranted. This practice strengthens the quality of future data collection, builds staff investment in the measurement process, and generates the kind of learning narrative that funders increasingly value.
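A staff review does not need more than a handful of counts to start a useful conversation. This sketch turns the per-participant changes from the previous example into the sort of summary a team might look at together each quarter; the category labels, and the choice to surface missing follow-ups explicitly, are assumptions to adapt.

```python
# A sketch of a quarterly review summary built from per-participant
# changes (as produced in the previous sketch). The labels and the
# explicit "missing follow-up" count are assumptions to adapt.

def review_summary(changes):
    summary = {"improved": 0, "no change": 0, "declined": 0, "missing follow-up": 0}
    for change in changes.values():
        if change is None:
            summary["missing follow-up"] += 1
        elif change > 0:
            summary["improved"] += 1
        elif change < 0:
            summary["declined"] += 1
        else:
            summary["no change"] += 1
    return summary

print(review_summary({"p-001": 2, "p-002": None, "p-003": 0}))
# {'improved': 1, 'no change': 1, 'declined': 0, 'missing follow-up': 1}
```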
Communicate honestly with funders about your capacity. If outcome reporting requirements exceed your organization's current capacity, say so, and propose a realistic path forward. Most funders prefer a credible plan for building measurement capacity over fabricated compliance with requirements that an organization cannot yet meet. This kind of transparency strengthens, rather than undermines, the funder relationship.
The Opportunity Within the Pressure
The demand for outcome data is real, and it is not going away. For many organizations, responding to it will require changes to how data is collected, managed, and used, changes that take time, investment, and sustained attention.
At the same time, the organizations that develop strong outcome measurement practices are building something more durable than funder compliance. They are building the capacity to understand their own impact, to make better program decisions, to communicate their value with confidence, and to sustain their work through funding cycles that will inevitably shift again.
The gap between what funders are asking for and what service providers can deliver is often narrower than it appears. Closing that gap is not a matter of becoming a research institution. It is a matter of approaching measurement as a practical discipline, grounded in the realities of service delivery, informed by the expertise of frontline staff, and designed to serve the dual purposes of accountability and learning.