Funders - Take a Tip from Michelangelo
“Ancora Imparo - I am still learning.” -Michelangelo.
Evaluation. Learning. Collective Impact. Outcomes. Change.
If you live in Foundation Land, chances are you’ve heard these words on more than one occasion, probably from your boss or your Board (or both!). Prior to my life at Foundant, I was a program officer at my local community foundation, and we had just shifted into a “how do we know what impact we are making in our community?” mentality, so these words started popping up A LOT.
During my time at the York County Community Foundation in York, PA, our focus areas were education, workforce development, and downtown and neighborhood revitalization. There are probably a million different ways you could measure success in those categories. To answer our own question, we knew we needed to quantify change to see how our grantmaking was affecting our community. Our challenge was to talk to a lot of people, read a lot of articles, and narrow the change we wanted to see down to specific outcomes and indicators.
There’s some definite risk in this. First off, when you dictate to a nonprofit that their program must achieve a hyper-specific thing, they have to decide a.) if that fits within their mission and b.) if they have the capacity to do it. This had side effects as well: nonprofits with only a tangential tie-in to the foundation’s focus areas were no longer eligible for grant funding. We had to be okay saying no to them.
And wait for it: more risk! It’s entirely possible that the months you spent talking to experts, reading articles and journals, and doing hours and hours of research to determine the 2-3 outcomes and indicators that demonstrate change in your community left you with ones that are outdated or, quite simply, wrong. Not only did we need to be okay with this, our Board did too.
A good example happened in our education focus area. In working with one of the most impoverished school districts in the state, we found that government-mandated standardized testing, which measures students against a fixed grade-level bar, was not a good indicator of student academic growth. (If a child is in 8th grade and reading on a 1st-grade level, you can’t use standardized tests to measure academic achievement because the student will always be behind.) Instead, we found a type of testing that measured growth over time, so if that 8th grader jumped to a 5th-grade reading level within a year... now, that’s progress! I think that’s a really important thing for funders to remember: you need to be able to pivot your game plan. After all, communities aren’t static; they change, and you must have the ability to change with them.
Our first iteration was a grant process with “preferred outcomes.” This turned out to be an interesting pilot: the grant requests that came in aligned with those outcomes really nailed it. From there, we narrowed down further for our next grant cycle and required applicants to achieve the outcomes we listed. We then split the process into two branches - if grantees could quantify their impact using our chosen indicators, they could apply for more funding (and thus, in theory, achieve impact on a larger scale). If an applicant could meet our outcomes but quantified the indicators in a different way, they could still apply for funding but were capped at a smaller amount. It was a good compromise: getting money out the door to a diverse group of nonprofits and programs while also being able to quantify the impact of our grantmaking by aggregating grant outcomes with similar data points.
I actually asked a grant writer once if having these hyper-specific outcomes as a requirement was a detriment to her. Her answer surprised me. It went something like:
“Are you kidding me? I actually know what a fundable request looks like for your foundation. I knew exactly which of our programs could fit and which could not. It saved me a ton of time.”
Not all foundations are going to get so granular with their outcomes and dictate to their applicants the change they want to see in the community. That’s okay (notice an “okay” theme happening here?). I think the biggest takeaway for funders is that organizational learning should be part of the foundation’s culture. No one knows everything, and how many times do you hear “you don’t know what you don’t know”? Because of that, we have to learn from our processes, learn from our mistakes, and constantly strive to improve. Our donors, our grantees, our grantees’ clients, and our communities deserve the best. So, let’s try some new things! And if we fail? We fail forward.
At Foundant, I now get to help clients with the other side of evaluation, learning, and collective impact. It’s exciting to be able to take my knowledge and experience and apply it to other organizations throughout the philanthropic sector!