Showing Impact in Grantmaking and Community Investment

In the evolving world of philanthropy and social investment, grantmakers are no longer satisfied with anecdotal success stories or vague indicators of change. They are increasingly turning to evaluation as a critical tool to inform decision-making, improve strategies, and deepen community impact. Done well, evaluation goes beyond accountability; it becomes a mechanism for learning, collaboration, and systems change.

This post explores how evaluation can be effectively integrated into grantmaking processes and leveraged to strengthen community outcomes.

Why Evaluation Matters in Grantmaking

1. It Improves Decision-Making

Evaluations help grantmakers understand what works, for whom, and under what conditions. By using formative and summative evaluations, funders can make more informed decisions about which programs to fund, expand, or sunset.

  • Formative evaluations provide early insights into implementation processes and help funders adjust strategies mid-course.

  • Summative evaluations assess the outcomes and impact at the end of a program cycle, offering a comprehensive look at effectiveness.

2. It Advances Equity and Inclusion

When rooted in participatory and culturally responsive methods, evaluation can help surface the voices of historically marginalized communities. These insights allow grantmakers to align funding strategies with the lived realities of those most impacted by social issues, improving both relevance and impact.

3. It Strengthens Learning Cultures

An evaluation-informed grantmaking culture encourages continuous learning among funders, grantees, and community stakeholders. This learning can lead to adaptive practices, better use of resources, and stronger collective action.

Key Strategies for Using Evaluation in Grantmaking

1. Build Evaluation Into the Grantmaking Lifecycle

Evaluation shouldn’t be an afterthought or a box to check. It should be embedded in each stage of the grantmaking cycle:

  • Pre-award: Use evaluations of prior programs or community assessments to identify needs and funding priorities.

  • Award: Require grantees to articulate outcomes and methods for measuring impact.

  • Post-award: Invest in capacity-building so grantees can carry out meaningful evaluations and share findings.

Example: A health foundation might use evaluation data from a pilot mental health program to revise its next RFP (Request for Proposals), ensuring future grantees address specific barriers to care revealed in the pilot.

2. Fund Evaluation as a Core Component of Grants

Many nonprofits lack the internal resources to conduct rigorous evaluations. Funders can support community impact by explicitly funding evaluation activities, including:

  • Data collection and analysis

  • Community-led research

  • Evaluation consultants

  • Training for nonprofit staff

Best Practice: Allow 10–15% of total grant funds to be used for evaluation-related activities, and make this funding flexible.

3. Use Developmental Evaluation for Complex Issues

In complex, evolving contexts—such as systems change or cross-sector initiatives—traditional evaluation methods may fall short. Developmental evaluation is a real-time, adaptive approach that supports innovation by helping stakeholders understand what is emerging and how to respond.

This is especially useful when:

  • Outcomes are uncertain or nonlinear

  • Stakeholders are co-creating solutions

  • The strategy is evolving in response to community input

Using Evaluation to Measure Community Impact

Community impact goes beyond individual program outcomes. It includes changes at the population, policy, or systems level. Evaluating community impact requires attention to both quantitative data (e.g., indicators of housing stability) and qualitative insights (e.g., resident perceptions of neighborhood safety).

1. Use a Theory of Change or Logic Model

A theory of change or logic model helps clarify how specific grant activities are expected to lead to community-level outcomes. It guides both the evaluation design and the interpretation of results.

  • Short-term outcomes: Knowledge, awareness, behavior change

  • Intermediate outcomes: Institutional practices, community norms

  • Long-term outcomes: Population-level improvements, reduced disparities

2. Align on Shared Metrics

When funders, grantees, and community members agree on shared metrics, it improves accountability and facilitates cross-program learning. Common indicators can be tailored to local context but still allow for aggregation and benchmarking.

Example: A regional collective impact initiative may track shared indicators across sectors, such as high school graduation rates, employment rates, or access to affordable housing.

3. Use Mixed Methods to Capture a Full Picture

Community impact is multidimensional and not always visible in numbers alone. Combining surveys, administrative data, focus groups, and storytelling can capture the nuanced effects of grant-funded work.

  • Quantitative data shows patterns and scale.

  • Qualitative data reveals meaning and lived experience.

Engaging Grantees and Communities in the Evaluation Process

A common critique is that evaluations are extractive, done to organizations and communities rather than with them. To be truly useful and ethical, evaluations must be collaborative and grounded in trust.

1. Co-Design Evaluation Plans

Involve grantees and community members in defining evaluation questions, selecting indicators, and interpreting results. This builds ownership and ensures the evaluation reflects community priorities.

2. Share Power and Data

Equitable evaluation includes transparency in how data is collected, analyzed, and used. Funders can:

  • Share dashboards and data reports with communities

  • Invite feedback on findings

  • Use evaluation results to advocate for policy change

3. Celebrate Learning, Not Just Success

Funders should create space for learning from failure and experimentation. This may mean supporting grantees who didn’t meet all outcomes but generated valuable lessons or tried bold new approaches.

Using Evaluation Findings to Inform Future Investments

Finally, the most underutilized step in the evaluation process is acting on the findings. Evaluation should not sit on a shelf. Use it to:

  • Refine future grant strategies

  • Shape public narratives about what works

  • Advocate for systems or policy change

  • Scale promising practices

Example: An education funder may use evaluation findings to support legislative efforts around equitable school funding or to expand high-performing pilot programs to more districts.

Conclusion

Incorporating evaluation into grantmaking is not just about proving impact—it's about improving impact. When approached thoughtfully, evaluations can strengthen nonprofit capacity, inform strategic funding decisions, and advance equitable community outcomes.

To realize this potential, funders must go beyond compliance-oriented models and embrace evaluation as a shared learning journey—one that includes grantees, community stakeholders, and residents as co-creators of knowledge and change.

Content created by Jodie; Blog drafted by AI
