Scholarship Program Evaluation Plans: Lessons Learned from the Field
I landed my first job in the scholarship industry nearly a decade ago with a small-yet-mighty provider that served low-income women in my state. The organization brought me on board to overhaul their programs, which had operated in much the same way for over 20 years.
In order to understand how that overhaul was impacting our recipients, we created and implemented the organization’s first-ever evaluation plan. It took a bit of work and a lot of creative thinking, but the information the evaluation plan provided about how we were doing was invaluable.
Many scholarship providers want to assess their efforts, but aren’t sure if they have the capacity or don’t know where to start. But, trust me: If I could do it with a team of just 1.25 FTE and almost no program evaluation experience, so can you!
So, what is an evaluation plan and how do you develop one? Read on to learn about the six key elements of the evaluation plan I created:
- Evaluation Design: Creating a Logic Model
- Activity Reports
- End-of-term Scholar Surveys
- Post-award Surveys
- Alumni Surveys
- Scholarship Reviewer Reports and Surveys
You can customize these tactics and make them your own, or use them to inspire your unique evaluation plan.
1. Evaluation Design: Creating a Logic Model
As many organizations and programs do, I started my evaluation planning by creating a logic model. A logic model provides a visual depiction of the changes you intend to make possible through your programming: the resources required, the activities you’ll undertake, and what you hope to accomplish as a result of those activities.
To begin creating a logic model, first determine the impact(s) you hope to make on the students you serve. Then use that “ideal future” to answer key questions.
EXAMPLE
My organization used a logic model to outline our programmatic work and guide our evaluative activities as shown below:
First, we asked ourselves:
- Impact: What future does our organization envision for those that we serve?
Answer: Low-income women have increased opportunity to achieve their desired potential and enrich their community and world
Then we outlined:
- Inputs/Resources: What type of resources does this work require, such as staffing, other stakeholders, funding, or capital?
Answers: Funding from donors, staff time, volunteer time
- Activities: What actions do we take with our resources to achieve our outcomes and impact?
Answers: Make scholarships, provide technical assistance as needed, offer connection and support through relationships with the organization
- Outputs: What are the results of our activities that make our outcomes possible? These are often expressed with numbers, such as total scholarship recipients or events hosted.
Answers: 155 scholarships per year, women use funding to attend college/pay for college-related expenses
- Short, Medium, and Long-Term Outcomes: What change do we make for those that we serve?
Answers: Increased confidence, achievement of a woman’s stated goals
For more on logic models and how they can help keep your program moving towards your intended impacts, check out our webinar, Logic Models: More Than Just Extra Work!
2. Activity Reports
Ten years ago we tracked our application and award activities in spreadsheets, updating them with detailed information on each applicant or scholarship recipient. Today, scholarship management platforms such as Foundant’s Scholarship Lifecycle Manager (SLM) allow for much easier, more accurate reporting on key program data such as:
- Number of submitted and draft applications
- Number and total dollars of scholarships awarded
- Total dollars and percent of budget spent and unspent
Our board requested updates on these application and award metrics as part of its quarterly Programs Committee report. We also included cumulative data and evaluation results in our annual fact sheet and other communications.
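If you track applications and awards in spreadsheets rather than a platform like SLM, these metrics can still be tallied with a short, repeatable script. Below is a minimal sketch, assuming a hypothetical CSV export with status and amount_awarded columns and an illustrative annual budget figure; the file and field names are placeholders, not a real SLM export format.

```python
import csv

# Hypothetical CSV export of application/award records.
# Column names are illustrative: status is "submitted" or "draft";
# amount_awarded is blank or 0 for applications that were not funded.
ANNUAL_BUDGET = 250_000  # illustrative program budget

submitted = drafts = awards = 0
dollars_awarded = 0.0

with open("applications.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["status"] == "submitted":
            submitted += 1
        elif row["status"] == "draft":
            drafts += 1
        amount = float(row["amount_awarded"] or 0)
        if amount > 0:
            awards += 1
            dollars_awarded += amount

pct_spent = dollars_awarded / ANNUAL_BUDGET * 100
print(f"Submitted applications: {submitted}, drafts: {drafts}")
print(f"Scholarships awarded: {awards} totaling ${dollars_awarded:,.2f}")
print(f"Budget spent: {pct_spent:.1f}% (unspent: {100 - pct_spent:.1f}%)")
```

The same totals can of course be computed with spreadsheet formulas; the point is simply to keep the calculation consistent from one reporting cycle to the next.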
3. End-of-term Scholar Surveys
At the end of each fall and spring academic term, our scholars completed end-of-term scholar surveys. These allowed us to understand scholars’ persistence through academic terms and to graduation. Scholars’ responses also offered us greater insights into their academic, personal, and professional achievements (as well as any issues they were facing).
We asked questions such as:
- What is your updated GPA?
- Do you intend to continue at your current college/university next academic term?
- Are you on track to graduate?
- Share a highlight of your past term.
- What are you most looking forward to next term?
Responses gave us individualized updates on each scholar and also allowed us to aggregate metrics for scholars as a whole. We were able to share key data with our community, including donors, such as scholars’ average GPA and the percentage on track to graduate.
We evaluated our end-of-term scholar surveys each winter and summer, and shared both quantitative and qualitative data about scholars’ successes in our annual program fact sheet and in other organizational and programmatic communications.
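To make that aggregation step concrete, here is a minimal sketch of how end-of-term responses might be rolled up into the metrics mentioned above: average GPA, persistence to the next term, and percent on track to graduate. The response records and field names are hypothetical and stand in for whatever export your survey tool produces.

```python
# Illustrative end-of-term survey responses; in practice these would be
# exported from your survey tool rather than typed in by hand.
responses = [
    {"scholar": "A", "gpa": 3.6, "continuing": True,  "on_track": True},
    {"scholar": "B", "gpa": 3.1, "continuing": True,  "on_track": True},
    {"scholar": "C", "gpa": 2.8, "continuing": False, "on_track": False},
]

average_gpa = sum(r["gpa"] for r in responses) / len(responses)
pct_on_track = sum(r["on_track"] for r in responses) / len(responses) * 100
pct_persisting = sum(r["continuing"] for r in responses) / len(responses) * 100

print(f"Average GPA: {average_gpa:.2f}")
print(f"On track to graduate: {pct_on_track:.0f}%")
print(f"Persisting to next term: {pct_persisting:.0f}%")
```

Keeping the rollup in one place makes it easy to rerun each winter and summer and to compare terms over time.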
4. Post-award Surveys
At the end of award periods, we asked scholars to complete post-award surveys. This typically took place twice annually, once after the end of the fall academic term and once after the end of the spring term, and included questions such as:
- Did this scholarship help you meet your educational goals? If so, how?
- Are there any ways we could improve our service to scholars like you?
- How was your experience working with the Foundation’s staff and the organization in general?
- Do you feel the Foundation contributed to your success? What would you give the Foundation credit for?
We also recorded anecdotal feedback provided during conversations and meetings with our scholars. We kept a shared document in which we entered both highlights and opportunities for improvement.
Twice a year, we reviewed and analyzed the data, looking for trends. If our process and programming were working for scholars, great! We continued with business as usual. But when we discovered trends that weren’t in line with what we’d outlined in our logic model, we worked to refine our activities, and sometimes even tweaked our established outputs and outcomes.
These post-award surveys and anecdotal feedback helped us make many significant programmatic updates. For instance, we learned that even though we covered book expenses during award periods, many scholars struggled to pay for supplies vital to their courses of study. Nursing students needed scrubs and stethoscopes, and chemistry students needed lab equipment. After learning more about these supply needs, we updated our policies to allow scholars to purchase them using their scholarship funding. In a single year, that one policy revision helped dozens of students reach their educational goals, allowing our organization to achieve our stated outcomes.
5. Alumni Surveys
Each summer, we conducted an alumni survey to understand the longer-term outcomes and impact of our programming on the women we served. Most years, we sent an online survey to all alumni we had contact information for; one year, we even tried sending snail mail postcards to scholars’ last known addresses.
Our survey questions focused on key changes we hoped to influence in our scholars’ lives, including college persistence and graduation, poverty, and employment. Questions included:
- Did you complete your college degree?
- What is the combined annual income for your household?
- Are you currently employed?
- How did the scholarship impact you and your life?
We assessed alumni survey results annually and reported on them in our annual program fact sheet and in other organizational and programmatic communications.
6. Scholarship Reviewer Reports and Surveys
During my time at the organization, we launched a competitive scholar selection process in which, for the first time, volunteers were asked to identify scholars who best fit the organization’s vision, mission, and values.
We used a simple spreadsheet to track reviewers’ participation, including data points such as:
- Number of scholarship reviewers invited, participating, and not participating in the scholar selection process
- Number of scholarship reviewers invited and participating in making congratulatory calls to selected scholars
In order to understand how well that selection process worked for the volunteers we engaged, we also administered a scholarship reviewers’ survey after each review cycle that included questions about:
- The effectiveness of the selection process
- Scholarship reviewers’ satisfaction with the selection process
Reviewers’ candid survey responses helped us refine aspects of this brand-new initiative, and reviewers even offered suggestions for overall process and program improvements.
We analyzed this information annually and shared highlights in our annual fact sheet and other communications.
Summary
While evaluation can range from the simple to the extremely complex, I can attest that activities such as those described above went a long way toward helping my organization achieve its intended impact efficiently and effectively. Whatever you come up with, I hope your program evaluation work helps you stay mission-focused while making meaningful changes to better serve your participants.
This blog is an original work of the attributed author and is shared with permission via Foundant Technologies' website for informative purposes only as part of our educational content in the philanthropic sector. The views, thoughts, and opinions expressed in this text belong solely to the author and do not necessarily reflect Foundant's stance on this topic. If you have questions or comments, please reach out to our team.