Anthropologists and Sociologists in Program Evaluation
In today’s data-driven nonprofit, government, and philanthropic landscapes, the demand for meaningful program evaluation is higher than ever. Stakeholders seek more than just numbers—they want stories, context, and actionable insights. While many organizations rely on data analysts and public policy experts to evaluate program success, they often overlook a powerful talent pool: anthropologists and sociologists.
These social scientists bring a distinctive combination of methodological rigor, cultural insight, systems thinking, and community-centered analysis that makes them uniquely valuable to program evaluation teams. Whether assessing an early childhood intervention program, a housing policy, or a community grantmaking initiative, anthropologists and sociologists can uncover not just whether a program works, but how and why it works (or doesn’t).
The list of reasons to engage anthropologists and sociologists in your next program evaluation is long; here are some of the most important:
1. Deep Understanding of Context and Culture
Anthropologists and sociologists are trained to study people in their real-world contexts. They understand that human behavior cannot be separated from culture, history, power structures, or environment.
This matters in evaluation because program outcomes are often influenced by factors that are invisible in a purely statistical analysis. For instance, a community health initiative may show modest improvement in diabetes rates, but an anthropologist might reveal that deeper issues—like intergenerational trauma, neighborhood disinvestment, or mistrust in medical institutions—are affecting health outcomes. These insights can help funders and implementers refine their strategies in more culturally responsive and equitable ways.
2. Qualitative Methods Expertise
While surveys and administrative data provide breadth, qualitative methods offer depth, and this is where anthropologists and sociologists shine.
Trained in ethnographic research, in-depth interviews, focus groups, and participant observation, these social scientists can uncover rich, nuanced data that answers critical questions like:
How do participants experience the program?
What barriers are they facing?
What do success and impact look like from their point of view?
Qualitative findings help humanize the data, explain surprising trends, and build the kind of narratives that funders and boards increasingly demand in their impact storytelling.
3. Systems Thinking and Structural Awareness
Sociologists, in particular, are trained to analyze how broader systems—like race, class, gender, and policy—shape individual and group outcomes. This macro-level lens is critical for organizations working on complex social issues such as homelessness, education equity, or criminal justice reform.
Rather than evaluating a program in isolation, sociologists ask:
How does this program interact with other systems (e.g., schools, housing markets, or immigration policy)?
Are there structural barriers limiting program success?
Is the intervention unintentionally reinforcing inequalities?
This kind of analysis is essential for evaluations aimed at systems change or equity-focused impact.
4. Community-Centered Approaches
Many anthropologists and sociologists are trained in participatory and decolonizing methods, which align with growing sector-wide commitments to community engagement, equity, and inclusion. They understand that communities are not just subjects of research but co-creators of knowledge.
This orientation enhances evaluation efforts in several ways:
Trust-building: Social scientists often have experience conducting research in marginalized or historically excluded communities with humility and respect.
Co-design: They know how to work alongside community members to co-create evaluation questions, tools, and interpretations.
Empowerment: Participatory approaches not only generate better data—they also build local capacity and democratize the learning process.
If you are interested in learning more about how to do this, join my upcoming webinar! The date will be announced shortly! Connect with me on LinkedIn to stay up to date with events and training opportunities.
5. Critical Thinking and Reflexivity
Both disciplines emphasize reflexivity, a practice of constantly examining one’s own biases, assumptions, and position in the research process. This is especially important in evaluation, where power dynamics between evaluators and communities can influence what is studied, how data is interpreted, and whose voices are amplified.
By being self-aware and critical of dominant narratives, anthropologists and sociologists help ensure that evaluations are not only methodologically sound but also ethically grounded.
6. Storytelling and Meaning-Making
Data only becomes useful when it is interpreted, contextualized, and communicated clearly. Anthropologists and sociologists are trained storytellers and theorists—they excel at turning data into meaningful insights.
Whether writing reports, presenting to funders, or facilitating community learning sessions, they can:
Connect the dots between numbers and narratives
Translate findings into plain language
Provide historical and cultural context for understanding program outcomes
Their communication skills make evaluation findings more compelling, accessible, and actionable.
7. Experience with Complexity and Adaptation
Programs are rarely linear, and outcomes often unfold in unpredictable ways. Anthropologists and sociologists are comfortable with complexity, nuance, and ambiguity. Rather than forcing clear-cut answers, they are skilled at navigating gray areas and identifying unintended consequences or emerging patterns.
In developmental or formative evaluations, where the goal is real-time learning rather than judgment, this mindset is invaluable.
8. Ethical Commitment to Justice and Voice
Many social scientists enter the field out of a commitment to social justice. They are attuned to power imbalances, historical harm, and the ethical implications of research and evaluation.
In evaluation settings, this often translates into:
Asking hard questions about who benefits from a program
Centering the voices of people most affected
Advocating for data sovereignty and community ownership of findings
This ethical compass makes them especially well-suited to work in organizations that aim to disrupt inequality, not reproduce it.
Final Thoughts
Hiring an anthropologist or sociologist for a program evaluation role is not just a nice-to-have—it’s a smart, strategic decision for organizations that are serious about equity, effectiveness, and impact.
These professionals bring a rare blend of technical skill, cultural literacy, systems awareness, and deep empathy that is urgently needed in today’s complex social landscape. Whether embedded in a nonprofit team, partnering with a foundation, or consulting for a government agency, anthropologists and sociologists offer more than measurement—they offer meaning.
As the social sector continues to evolve, so too should our approach to evaluation. Let’s recognize the unique value that social scientists bring—and make space for them to lead.
If your organization is looking for program evaluators who can engage communities, tell powerful stories, and drive systems change, reach out to Jodie, your favorite social scientist with a PhD in anthropology and sociology, at Jodie@ChangeAmplifiers.Com
*Content provided by Jodie. Blog drafted in partnership with AI*
Showing Impact in Grantmaking and Community Investment
In the evolving world of philanthropy and social investment, grantmakers are no longer satisfied with anecdotal success stories or vague indicators of change. They are increasingly turning to evaluation as a critical tool to inform decision-making, improve strategies, and deepen community impact. Done well, evaluation goes beyond accountability; it becomes a mechanism for learning, collaboration, and systems change.
This blog explores how evaluations can be effectively integrated into grantmaking processes and leveraged to strengthen community outcomes.
Why Evaluation Matters in Grantmaking
1. It Improves Decision-Making
Evaluations help grantmakers understand what works, for whom, under what conditions. By using formative and summative evaluations, funders can make more informed decisions about what programs to fund, expand, or sunset.
Formative evaluations provide early insights into implementation processes and help funders adjust strategies mid-course.
Summative evaluations assess the outcomes and impact at the end of a program cycle, offering a comprehensive look at effectiveness.
2. It Advances Equity and Inclusion
When rooted in participatory and culturally responsive methods, evaluation can help surface the voices of historically marginalized communities. These insights allow grantmakers to align funding strategies with the lived realities of those most impacted by social issues, improving both relevance and impact.
3. It Strengthens Learning Cultures
An evaluation-informed grantmaking culture encourages continuous learning among funders, grantees, and community stakeholders. This learning can lead to adaptive practices, better use of resources, and stronger collective action.
Key Strategies for Using Evaluation in Grantmaking
1. Build Evaluation Into the Grantmaking Lifecycle
Evaluation shouldn’t be an afterthought or a box to check. It should be embedded in each stage of the grantmaking cycle:
Pre-award: Use evaluations of prior programs or community assessments to identify needs and funding priorities.
Award: Require grantees to articulate outcomes and methods for measuring impact.
Post-award: Invest in capacity-building so grantees can carry out meaningful evaluations and share findings.
Example: A health foundation might use evaluation data from a pilot mental health program to revise its next RFP (Request for Proposals), ensuring future grantees address specific barriers to care revealed in the pilot.
2. Fund Evaluation as a Core Component of Grants
Many nonprofits lack the internal resources to conduct rigorous evaluations. Funders can support community impact by explicitly funding evaluation activities, including:
Data collection and analysis
Community-led research
Evaluation consultants
Training for nonprofit staff
Best Practice: Allow 10–15% of total grant funds to be used for evaluation-related activities, and make this funding flexible.
3. Use Developmental Evaluation for Complex Issues
In complex, evolving contexts—such as systems change or cross-sector initiatives—traditional evaluation methods may fall short. Developmental evaluation is a real-time, adaptive approach that supports innovation by helping stakeholders understand what is emerging, and how to respond.
This is especially useful when:
Outcomes are uncertain or nonlinear
Stakeholders are co-creating solutions
The strategy is evolving in response to community input
Using Evaluation to Measure Community Impact
Community impact goes beyond individual program outcomes. It includes changes at the population, policy, or systems level. Evaluating community impact requires attention to both quantitative data (e.g., indicators of housing stability) and qualitative insights (e.g., resident perceptions of neighborhood safety).
1. Use a Theory of Change or Logic Model
A theory of change helps clarify how specific grant activities are expected to lead to community-level outcomes. It guides both the evaluation design and the interpretation of results.
Short-term outcomes: Knowledge, awareness, behavior change
Intermediate outcomes: Institutional practices, community norms
Long-term outcomes: Population-level improvements, reduced disparities
2. Align on Shared Metrics
When funders, grantees, and community members agree on shared metrics, it improves accountability and facilitates cross-program learning. Common indicators can be tailored to local context but still allow for aggregation and benchmarking.
Example: A regional collective impact initiative may track shared indicators across sectors such as high school graduation, employment rates, or access to affordable housing.
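To make aggregation concrete, here is a minimal sketch in Python of rolling a shared indicator up from individual grantees to the initiative level. The grantee names, field names, and figures are hypothetical.

```python
import pandas as pd

# Hypothetical grantee reports on one shared indicator: high school graduation
reports = pd.DataFrame({
    "grantee": ["Youth Org A", "Youth Org B", "Youth Org C"],
    "students_served": [200, 150, 100],
    "graduates": [170, 120, 78],
})

# Benchmark each grantee, then aggregate to the initiative level
reports["graduation_rate"] = reports["graduates"] / reports["students_served"]
initiative_rate = reports["graduates"].sum() / reports["students_served"].sum()

print(reports[["grantee", "graduation_rate"]])
print(f"Initiative-wide graduation rate: {initiative_rate:.1%}")
```

Because every grantee reports the same numerator and denominator, the rates remain comparable across programs and can be benchmarked against regional baselines.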
3. Use Mixed Methods to Capture a Full Picture
Community impact is multidimensional and not always visible in numbers alone. Combining surveys, administrative data, focus groups, and storytelling can capture the nuanced effects of grant-funded work.
Quantitative data shows patterns and scale.
Qualitative data reveals meaning and lived experience.
Engaging Grantees and Communities in the Evaluation Process
A common critique is that evaluations are extractive, done to organizations and communities rather than with them. To be truly useful and ethical, evaluations must be collaborative and grounded in trust.
1. Co-Design Evaluation Plans
Involve grantees and community members in defining evaluation questions, selecting indicators, and interpreting results. This builds ownership and ensures the evaluation reflects community priorities.
2. Share Power and Data
Equitable evaluation includes transparency in how data is collected, analyzed, and used. Funders can:
Share dashboards and data reports with communities
Invite feedback on findings
Use evaluation results to advocate for policy change
3. Celebrate Learning, Not Just Success
Funders should create space for learning from failure and experimentation. This may mean supporting grantees who didn’t meet all outcomes but generated valuable lessons or tried bold new approaches.
Using Evaluation Findings to Inform Future Investments
Finally, the most underutilized aspect of evaluation is acting on the findings. Evaluation should not sit on a shelf. Use it to:
Refine future grant strategies
Shape public narratives about what works
Advocate for systems or policy change
Scale promising practices
Example: An education funder may use evaluation findings to support legislative efforts around equitable school funding or to expand high-performing pilot programs to more districts.
Conclusion
Incorporating evaluation into grantmaking is not just about proving impact—it's about improving impact. When approached thoughtfully, evaluations can strengthen nonprofit capacity, inform strategic funding decisions, and advance equitable community outcomes.
To realize this potential, funders must go beyond compliance-oriented models and embrace evaluation as a shared learning journey—one that includes grantees, community stakeholders, and residents as co-creators of knowledge and change.
*Content created by Jodie; blog drafted by AI*
Logic Models and Theories of Change: What are they and why are they important?
In the nonprofit and public sectors, designing and implementing effective programs requires more than good intentions—it requires strategic thinking, clear planning, and mechanisms for accountability and learning. Two critical tools in this effort are the logic model and theory of change. While often used interchangeably, they serve distinct but complementary functions in program development, implementation, and evaluation.
Logic Model vs. Theory of Change: What’s the Difference?
At a glance, a logic model is a visual representation that outlines the resources, activities, outputs, and intended outcomes of a program. It illustrates a linear pathway from inputs to impact, helping stakeholders see how program elements connect.
In contrast, a theory of change (ToC) is a more comprehensive, narrative-driven framework that articulates the underlying assumptions about how and why a program will bring about change. It explains the causal mechanisms and contextual factors that shape program success, often incorporating external influences, systems-level considerations, and long-term vision.
Think of the logic model as a program's "roadmap" and the theory of change as the "compass" guiding why that road was chosen and what changes are expected over time.
Why Logic Models and Theories of Change Matter
Both tools play a foundational role throughout the lifecycle of a program:
1. Design
During the program design phase, a theory of change helps organizations clarify their goals and ensure that the proposed activities are grounded in a logical and evidence-informed pathway. It pushes teams to answer questions like:
What problem are we addressing?
What conditions need to change?
How will our actions influence these conditions?
A logic model then translates this strategic thinking into a more structured plan. It identifies specific resources (inputs), planned activities, expected outputs (products or services delivered), and intended outcomes (short-, medium-, and long-term changes).
This pairing ensures that the program design is both aspirational and actionable.
2. Implementation
In implementation, the logic model serves as a valuable management tool. It helps program staff stay aligned with planned activities and deliverables and provides a framework for tracking progress.
Meanwhile, the theory of change helps interpret what’s happening. If implementation deviates from the plan or outcomes aren’t as expected, revisiting the theory of change can illuminate whether foundational assumptions were incorrect or if contextual conditions have shifted.
3. Evaluation
For evaluation purposes, these tools provide essential scaffolding. The logic model informs performance indicators and data collection aligned with each stage of the program, supporting the more binary question: "Did this program work?"
The theory of change helps evaluators assess not only whether the program achieved its outcomes but why it did or didn’t.
Evaluators can test the assumptions embedded in the theory of change, assess whether causal linkages hold true, and provide insights into how the program might be refined or scaled.
How to Design a Logic Model and Theory of Change
Though the two tools are distinct, designing them should be an iterative and collaborative process that engages program designers, frontline staff, evaluators, and—ideally—participants.
Designing a Theory of Change
Define the long-term goal: What is the desired social impact or condition you aim to influence? It is crucial that the community is involved in defining the long-term goal. Without the community's input, you risk solving a problem the community does not have while neglecting one it does.
Map the preconditions: What changes must occur for this goal to be realized? These may include shifts in knowledge, behavior, systems, or relationships. Community involvement here is also important. The community knows what needs to change for them to thrive. Simply asking the community, “What does your ideal day look like?” will illuminate what needs to change in the community.
Identify interventions: What activities or strategies will create those preconditions? Similarly, the community tends to know what activities and strategies will be most helpful to them. Using the data provided from the above question, the community can co-design activities and strategies that would best support them in achieving their ideal day.
Articulate assumptions: What beliefs or evidence underlie the expected causal pathways? We all come with beliefs and assumptions. It is important to note these up front, as they can either help or hinder the success of the plan. For example, when attempting to achieve the outcome of reducing poverty, you may design a financial literacy class. The assumptions behind this are that the people participating in the program do not know how to manage their money and that they earn enough to need help managing it. The community can often call out our assumptions in ways that we may not see ourselves. However, it is important to note that the community also has assumptions that need to be surfaced.
Consider context: What external factors (e.g., policies, funding, social dynamics) might support or hinder progress?
A strong theory of change makes explicit what is often implicit. It encourages rigorous discussion about the “why” behind the program and highlights potential risks or blind spots.
Designing a Logic Model
A typical logic model includes five components:
Inputs: Resources such as staff, funding, partnerships, and materials.
Activities: What the program will do (e.g., training sessions, counseling, community events).
Outputs: Direct products of activities (e.g., number of workshops held, participants served).
Outcomes: Changes expected in participants or systems, categorized as short-, medium-, or long-term.
Impact: The broader systemic or population-level change aligned with the program’s mission.
Logic models can be developed as simple charts or more complex diagrams with feedback loops, depending on the program’s complexity.
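For teams that want to keep the logic model alongside their program data, it can also be captured as structured data. Below is a minimal sketch in Python; the program details and field names describe a hypothetical job-training program and are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Container for the five logic model components."""
    inputs: list[str] = field(default_factory=list)       # resources: staff, funding, partners
    activities: list[str] = field(default_factory=list)   # what the program will do
    outputs: list[str] = field(default_factory=list)      # direct products of activities
    outcomes: dict[str, list[str]] = field(default_factory=dict)  # short-/medium-/long-term
    impact: str = ""                                       # broader systemic change

# Illustrative example: a hypothetical job-training program
job_training = LogicModel(
    inputs=["2 trainers", "$150,000 grant", "employer partnerships"],
    activities=["12-week skills workshops", "one-on-one career coaching"],
    outputs=["4 cohorts per year", "80 participants served"],
    outcomes={
        "short-term": ["increased job-readiness skills"],
        "medium-term": ["job placement within 6 months"],
        "long-term": ["sustained employment and higher wages"],
    },
    impact="Reduced unemployment in the target neighborhood",
)

print(job_training.outcomes["medium-term"])  # -> ['job placement within 6 months']
```

Keeping the model in a structured form like this makes it easy to pair each output and outcome with the indicator that will measure it.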
Using These Tools to Guide Adaptation and Improvement
Programs operate in dynamic environments, and learning to adapt based on data and reflection is essential for long-term effectiveness. Logic models and theories of change provide structured ways to understand when and how to make program changes.
When to Revisit the Logic Model
Outputs are not being met: This may suggest issues in resource allocation, staffing, or implementation fidelity.
Unanticipated activities emerge: Programs may expand organically. Updating the logic model ensures it still represents reality.
Data reveals unintended consequences: Adjustments may be needed in program delivery or scale.
When to Revisit the Theory of Change
Assumptions don’t hold: If outcomes aren’t achieved despite high-quality implementation, the underlying causal logic may need revisiting.
Context has changed: Shifts in policy, funding, or community needs may require a rethinking of how change can happen.
Participant feedback challenges initial hypotheses: Lived experience often provides critical insights that call for recalibrating the theory of change.
Using Both for Continuous Improvement
Together, these tools allow for adaptive management. For example, if a workforce development program is not achieving job placements despite strong participation, the logic model might help isolate a gap in employer engagement activities. The theory of change might reveal that the assumption “training leads directly to employment” overlooks systemic hiring biases. Armed with this insight, the program could introduce employer education or policy advocacy to address the barrier.
By regularly revisiting both the logic model and theory of change, program leaders can ensure that implementation remains aligned with strategy and that strategy evolves in response to learning.
Conclusion
Logic models and theories of change are not just compliance tools for funders—they are essential instruments for strategic clarity, operational discipline, and evaluative insight. When developed thoughtfully and used iteratively, they foster learning cultures that embrace complexity, pursue evidence, and adapt to achieve meaningful impact. Whether designing a new initiative or refining a long-standing program, investing time in these frameworks is a wise and necessary step toward lasting change.
Reach out to Jodie with Change Amplifiers to co-design your theories of change and logic models at Jodie@ChangeAmplifiers.Com
*Content developed by Jodie; blog drafted with the assistance of AI*
So many types of evaluations - where do I start?
Program evaluation
“Evaluation is a process that critically examines a program. It involves collecting and analyzing information about a program's activities, characteristics, and outcomes. Its purpose is to make judgments about a program, to improve its effectiveness, and/or to inform programming decisions.” (Patton, 1987)
Evaluation (well-designed and executed) helps us to make informed decisions. While both research and evaluation involve systematic inquiry, they differ in terms of their purpose, timing, generalizability, stakeholder involvement, and the use of findings.
Different types of evaluation
There are several types of evaluation, such as formative, summative, developmental, and economic evaluation. Two of the most common frameworks, however, are formative evaluation and summative evaluation; the figure below visualizes them.
Formative Evaluation (Process/Implementation Evaluation)
Formative evaluations are conducted during program development and the implementation of new programs. A formative evaluation ensures that a program or program activity is feasible, appropriate, and acceptable.
By using a formative evaluation, we focus on:
Reliability: Were the program activities actually delivered?
Quality: How can activities and processes be improved?
Integrity: Are we doing what we think we're doing? What are the strengths and weaknesses of daily activities? Is the program complete, or are we missing an activity?
Efficiency: Are the timelines acceptable? Can we improve timelines and processes? Are there any context-related factors affecting performance (things outside the program's direct control)?
Summative Evaluation (Outcome Evaluation)
Summative evaluations are completed once your programs are well established. A summative evaluation tells you to what extent the program is achieving its intended outcomes and whether the program should be continued.
By using a summative evaluation, we focus on:
Benefit-Cost: Is the program effective? Is it the best use of resources?
Effectiveness: What changes were made? Did we meet benchmarks?
Efficiency: Did the program change behaviour as well as expected? If not, why not?
Evaluating Everyday Activities:
As an example, the image below illustrates an evaluation of an everyday activity: baking a birthday cake. This represents a comprehensive program evaluation, incorporating both formative evaluation (assessing input, process, and outcomes) and summative evaluation (focusing on the outcome).
Note: Best practice in program evaluation is to conduct a comprehensive evaluation that combines summative and formative evaluation.
Question: Why do we need both? Because we need to evaluate both process and outcomes.
Developmental Evaluation (DE)
This evaluation framework is based on systems thinking and facilitates innovation by gathering and analyzing real-time data to support informed, continuous decision-making throughout the design, development, and implementation process. This approach is especially useful for innovations where the path to success is uncertain (Patton, 2010). By examining how a new approach unfolds, DE can help address questions such as:
What is emerging as the innovation takes shape?
What do initial results reveal about expected progress?
What variations in effects are we seeing?
How have different values, perspectives, and relationships influenced innovation and its outcomes?
How is the larger system or environment responding to the innovation?
Economic Evaluation:
This evaluation framework is a valuable tool that enables users to maximize resources, evaluate promising program options, and showcase the advantages of their program (WHO, 2023). Here are some questions to think about:
How do you know you’re making the most of your limited resources?
How do you decide between two promising program options when you can only afford one?
How do you demonstrate to decision-makers that the benefits of your program are worth the costs?
There are different types of Economic Evaluation such as:
Cost-Minimization Analysis (CMA): Compares costs of interventions that have already been proven to have equivalent outcomes.
Cost-Effectiveness Analysis (CEA): Compares costs relative to a single, natural unit of outcome (e.g., life-years gained, cases prevented).
Cost-Utility Analysis (CUA): A special form of CEA that uses quality-adjusted life years (QALYs) or disability-adjusted life years (DALYs) to compare interventions.
Cost-Benefit Analysis (CBA): Converts both costs and benefits into monetary terms to compare net benefits.
Cost-Consequence Analysis (CCA): Lists various costs and outcomes without aggregating them into a single measure, allowing decision-makers to weigh trade-offs.
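To make the arithmetic behind two of these concrete, here is a minimal sketch in Python of an incremental cost-effectiveness ratio (CEA) and a net-benefit calculation (CBA). All cost and outcome figures are hypothetical.

```python
# Cost-Effectiveness Analysis (CEA): incremental cost-effectiveness ratio (ICER)
# ICER = (cost of new program - cost of comparator) / (effect of new - effect of comparator)
cost_new, cost_old = 500_000, 300_000   # program costs in dollars (hypothetical)
effect_new, effect_old = 120, 70        # e.g., cases prevented (hypothetical)

icer = (cost_new - cost_old) / (effect_new - effect_old)
print(f"ICER: ${icer:,.0f} per additional case prevented")  # $4,000

# Cost-Benefit Analysis (CBA): costs and benefits both in monetary terms
benefits, costs = 750_000, 500_000      # monetized benefits and costs (hypothetical)
net_benefit = benefits - costs
bc_ratio = benefits / costs
print(f"Net benefit: ${net_benefit:,.0f}; benefit-cost ratio: {bc_ratio:.2f}")
```

A CUA follows the same ICER arithmetic with QALYs or DALYs in the denominator, while a CCA simply reports the cost and outcome lines side by side without combining them.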
References:
Patton, M. Q. (2010). Developmental evaluation: Applying complexity concepts to enhance innovation and use. Guilford Press.
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). SAGE Publications.
WHO. (2023). Introduction to economic evaluation. Teaching workshop on a national program, February 2023.
Drummond, M. F., Sculpher, M. J., Claxton, K., Stoddart, G. L., & Torrance, G. W. (2015). Methods for the economic evaluation of health care programmes (4th ed.). Oxford University Press.
A little about the author:
I am Mandana Karimi, a Sociology Instructor at Capilano University and an Evaluation Specialist at Fraser Health Authority. I hold a Ph.D. in Political Sociology, an M.A. in Sociology, and a B.A. in Social Planning. My research and professional interests include Political Sociology, Environmental Sociology, Social Health, Policy Analysis, Critical Sociology, Ethnography, and Mixed Methods Research.
Introduction to Needs Assessments
“Our program is meeting its stated goals, but our clients are still struggling!”
Have you ever experienced this phenomenon? Your program works as designed – your clients are experiencing the outcomes you’ve decided were important to measure. However, your clients are still struggling.
Don’t worry. You are not alone!
Needs assessments are critical tools in the toolbelt of any nonprofit or government organization that is responsible for designing programs that serve the needs of a group of people.
A needs assessment is an evaluation tool that helps identify gaps between what is currently happening and your desired outcome. For example, let's say you manage a nonprofit that serves individuals experiencing mental illness. Your typical program consists of offering free counseling to those who need it. Your clients attend counseling sessions every week and meet their socioemotional outcomes; however, they still cannot maintain a living-wage job, so they struggle to keep safe and stable housing and to obtain groceries every week.
Currently happening: counseling.
Desired outcome: to live independently.
How do we know what we are missing? It is clear that counseling alone does not enable your clients to live independently, so what else needs to happen?
Here is where a needs assessment is crucial.
A needs assessment is a type of evaluation conducted in collaboration with the community (your clients or potential clients) to determine their challenges and how to solve them. There are several methodologies to achieve these results.
Surveys, interviews, focus groups, and observations are typical methodologies for gleaning information for a needs assessment. The key is to gather as much information as possible (from diverse methodologies) about what your population is struggling with and what they think they need to overcome those challenges. Oftentimes, we assume that we are the experts and know what is best – but in fact, the community is the expert in its own experience. Too often, we ask about their challenges without asking what solutions they hope to see. Communities often know exactly what their challenges are and exactly what will best support them – they need someone who will co-create those types of programs with them.
When surveying, interviewing, or facilitating focus groups, collect demographic information. Then, compare responses across various demographics. You want to know if all or most people in a certain demographic are answering in certain ways. Are all of your white clients having a different challenge than all of your Black clients? Perhaps a solution that would work in your white American community would not work as well in your immigrant community.
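As a minimal sketch of that comparison, assuming responses have been collected into a table with a demographic column and a "biggest challenge" column (both names are hypothetical), a simple cross-tabulation shows whether challenges cluster by group:

```python
import pandas as pd

# Hypothetical survey responses: one row per respondent
responses = pd.DataFrame({
    "demographic": ["White", "Black", "White", "Black", "Immigrant", "Immigrant"],
    "biggest_challenge": ["transportation", "childcare", "transportation",
                          "childcare", "language access", "language access"],
})

# Share of each demographic group naming each challenge (each row sums to 1.0)
comparison = pd.crosstab(
    responses["demographic"],
    responses["biggest_challenge"],
    normalize="index",
)
print(comparison)
```

If one group's responses concentrate on a challenge other groups rarely name, that is a signal to co-design a solution tailored to that group rather than a one-size-fits-all program.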
When designing your needs assessment project, consider the following:
1. Create an ad hoc advisory board. This board will help you draft questions, recruit study participants, and make sense of the results. Having an advisory board (made up of people from the community) helps ensure you draw the right conclusions from the information you are given. When analyzing data from your perspective alone, you will likely design a program that looks vastly different than when someone with lived experience analyzes the same dataset. Experiences matter and are crucial in the design and analysis process. Your job is not to have the answer but to elevate the voices of those who have the answers.
2. Triangulate your data by engaging in multiple research methodologies. Use surveys, interviews, focus groups, and observation together. When you have multiple methodologies and the same information is being repeated across each method, you know it is an important feature in your dataset. Stagger your methods so they work with and not against one another. You could conduct a series of interviews or focus groups first to glean key information from important stakeholders, then use that information to design a survey to go out to the masses. Or you could design a survey to go out to the masses to see where people are in general terms, then use those survey responses to design your interview or focus group questions. Either strategy is excellent, depending on the goals.
3. Consider who you engage in each type of methodology. Interviews yield rich, in-depth details about someone's life. A focus group gathers details (though with less depth) from multiple people simultaneously. Focus groups are excellent for gathering data from multiple perspectives; just remember that the data may not be as detailed as you need. Sometimes, it is a good idea to follow up a focus group with a few in-depth interviews with people from the focus group. Additionally, the questions you ask will determine your methodology: you don't want to ask sensitive or potentially embarrassing questions in a focus group. Finally, you want the most detailed and rich information from those closest to the issue. Consider interviewing those you think are key to the issue rather than those who seem most important based on their title. For example, when determining what barriers someone with a mental illness faces to employment, ask someone who is unemployed rather than a CEO who makes no hiring decisions.
Regardless of the topic, there are some key questions I always like to ask that help set the stage.
1. What does an average day look like for you now?
2. What would your dream day look like for you?
3. What is missing between your average day and your dream day?
4. What are some of the biggest barriers you face regarding (organizational mission)?
5. If you could design a solution that would alleviate some of your biggest challenges, what would it look like?
Overall, needs assessments provide rich information on gaps between what you are currently doing and what needs to be done based on the perspective of the community you serve. Multiple methodologies and strategies will help you on your journey. The key to remember is community engagement – do not make decisions without community involvement.
If you are interested in conducting a needs assessment but don’t know where to start, contact me at Jodie@ChangeAmplifiers.Com
How Evaluation Supports Program Design and Implementation
As nonprofit leaders, we continuously design or modify programs to better serve our clients. The question becomes: “How do we design programs so they meet the direct needs of our clients?”
The answer lies in the program design, implementation, and evaluation cycle. This cycle begins with conducting a needs assessment. Based on the needs assessment, the program can be designed and funding can be secured. Once funding is received, program implementation can begin. Finally, at regular intervals, program evaluations occur.
The needs assessment is the foundation of this whole process. A needs assessment engages potential clients to hear their biggest challenges and potential solutions directly from them. The key to this phase of the cycle is engaging the community. So often, nonprofit organizations skip this step because they think or assume they know the community's needs. The community tends to know what it needs – it just needs to be empowered to ask for it.
Let's take a simple example: housing. A nonprofit organization might assume that if there is a housing crisis, people need housing – so it designs a program that builds houses. However, after building the homes, the organization finds that no one buys them. Why?
When asked what they need, community members say their biggest need is employment so they can afford housing. Building new houses won't solve the problem because families do not earn enough to buy a home. A program offering employment training services, however, may give families the ability to stabilize their housing situation. This example demonstrates that the community is the expert in knowing what it needs. So, just ask!
Now that you know what the community needs, it is time to design the program. Since the community knows what it needs, create an advisory board of community members to guide the program design, clarify issues that may arise, and build buy-in from the community. Ensure you draft a program design – often in the form of a Theory of Change or Logic Model (more to come on that in March!).
Your advisory board and you have designed a program that could work for your community. Now what?!
Securing funding is a critical component of the cycle. As nonprofit leaders, you fully grasp the importance of funding. It is more difficult to secure funding for a new pilot program than for an established program with results. There are several grantmakers, however, that will support pilot programs. Often, smaller, family-owned foundations will support pilot programs if they are passionate about the program's goal. Another funding avenue is "major gifts" fundraising: individual philanthropists may feel passionate about your program and donate substantial financial capital to support it. You may have to apply for multiple grants or seek donations from multiple philanthropists to secure enough funding to implement your program. This is where your program design (your Theory of Change or Logic Model) comes in handy. Grantmakers (and philanthropists, to an extent) want to see that you've thought about what your program will do and how it will do it. Put your Theory of Change or Logic Model to work for you.
Finally, a major donor who is particularly passionate about your new program has come forward. Now is the time to implement the program. Implementing programs is what we do every day – we know we need to hire staff, recruit clients, and begin performing services.
After conducting services for a year, you will want to determine whether your program is working. Now is the time to conduct an impact evaluation (more to come in April!). It is best practice to evaluate the program annually to determine whether modifications are needed. Additionally, having results is a plus when it comes time to apply for more grants or seek more donations! You can conduct the evaluation internally, using these blogs for support and guidance, or hire an external evaluator with expertise in conducting needs assessments and evaluations. Tips and tricks on working with an external evaluator will come in June!
In summary, evaluation is critical to the program design and implementation cycle. It ensures you are designing programs that truly meet the complex needs of the communities you serve; it allows you to see your impact, and, of course, it helps you secure additional funding. Use this blog series over the course of this year to develop your evaluation program! Reach out to me at Jodie@ChangeAmplifiers.Com for questions about all things evaluation!