Evaluating programs' success is an important part of a programming librarian's work. Knowing what worked — and having information to back up your claims — can help you explain your successes to bosses and board members, apply for grants, and plan programs that serve your community even better in the future.
Virtual programming is no exception. Collecting information before, during and after each virtual program gives library staff what they need to evaluate its success. I share some strategies below. I also presented this information in a Programming Librarian webinar, "Evaluating Program Success in a World Gone Virtual"; check out the recording below.
Before your program
We rarely think about what can be collected before our program or event, but a lot of data can be mined from pre-event marketing. This information can help you start to evaluate your program even before it occurs.
Metrics like email open rates, email click-through rates and registration numbers can give you a baseline understanding of whether your audience is interested in a program and whether they plan to attend. Dive further into your email platform's analytics and you can see precisely who is opening your emails and clicking the links inside.
Also look at pre-event engagement on your social media posts. Higher numbers of impressions and engagements often translate to higher event attendance, but not always. They can, however, indicate what people are interested in overall. There are many reasons why someone might not attend even though they are interested in the topic, so if a well-promoted program draws a small audience, consider offering it again on a different day or at a different time.
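If you would rather not dig through each platform's dashboard, these rates are simple to calculate yourself. Here is a minimal sketch in Python; every count below is a made-up placeholder, not data from a real campaign.

```python
# Minimal sketch: turning raw marketing counts into the rates discussed
# above. All numbers are hypothetical placeholders.
emails_delivered = 500
emails_opened = 140
links_clicked = 35
registrations = 22
post_impressions = 2000
post_engagements = 90

open_rate = emails_opened / emails_delivered           # 140/500 = 28.0%
click_through_rate = links_clicked / emails_delivered  # 35/500 = 7.0%
engagement_rate = post_engagements / post_impressions  # 90/2000 = 4.5%

print(f"Email open rate: {open_rate:.1%}")
print(f"Email click-through rate: {click_through_rate:.1%}")
print(f"Social engagement rate: {engagement_rate:.1%}")
print(f"Registrations per email delivered: {registrations / emails_delivered:.1%}")
```

Tracked program over program, these simple ratios make it easy to spot which topics and marketing channels are pulling their weight.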
During your program
At an in-person event, attendance is simple: we can record the number of people we get through the door. If an attendee stands up and leaves the room in the middle of a program, we might make a mental note of it and adjust our attendance count, but that doesn't happen very often.
It's much easier for patrons to leave in the middle of a virtual program, and whether your audience sticks it out for the whole event can be a useful metric. Most platforms let you see how many people are connected at any given moment, so you can watch attendance rise and fall over the course of the program. Comparing those drop-off patterns is one efficient way to track which virtual programs perform best.
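If your platform exports a participant report with join and leave times, you can reconstruct that attendance curve yourself. The sketch below uses made-up timestamps, and real exports vary by platform, so adjust the parsing to match the columns your report actually contains.

```python
# Minimal sketch: charting attendance over a program's run from a list of
# (join, leave) times per attendee. All timestamps are hypothetical.
from datetime import datetime, timedelta

fmt = "%H:%M"
raw = [("18:00", "19:00"), ("18:02", "18:45"),
       ("18:05", "19:00"), ("18:00", "18:20")]
sessions = [(datetime.strptime(j, fmt), datetime.strptime(l, fmt))
            for j, l in raw]

start = min(j for j, _ in sessions)
end = max(l for _, l in sessions)

# Sample attendance every 10 minutes and print a simple text chart.
t = start
while t <= end:
    present = sum(1 for j, l in sessions if j <= t < l)
    print(f"{t:%H:%M} | {'#' * present} ({present})")
    t += timedelta(minutes=10)

# Share of attendees still present in the final five minutes.
stayed = sum(1 for _, l in sessions if l >= end - timedelta(minutes=5))
print(f"Stayed to the end: {stayed / len(sessions):.0%}")
```

A chart like this makes it obvious whether people drifted away gradually or left in a wave after a particular segment.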
You should also pay attention to the level of interaction and engagement during a program. At an in-person event, you might take some notes about the energy level in the room or the questions asked during Q&A. In virtual programs, you can capture chat box comments or enable tools like polls and surveys so attendees can signal their engagement.
After your program
Numbers are great, but they will never tell the whole story of whether your program succeeded or failed. Anecdotes and quotes from participants can help bring your statistics to life.
Following up with attendees within two or three days of a virtual event can provide a wealth of information. You can do this through a free SurveyMonkey form or Google Form emailed to your list of attendees and registrants.
Open-ended questions work best. Here are some sample questions I like to ask in my surveys:
- What did you like?
- What topics would you like to see in future programs?
- How can we improve the experience for the next event?
The more specific feedback you can get, the better you can make your next virtual event. I also make a point to follow up with no-shows and non-responders and ask what prevented them from attending the event.
It's also helpful to gather the analytics provided by whichever platform you used, whether that's Zoom, YouTube or Facebook Live. You might be surprised by how much information these platforms offer. For instance, YouTube analytics provide average view duration, peak viewing times, and how people found the video. Facebook analytics provide peak viewing times, the average video completion percentage, and how many people viewed the video for only 10 seconds. Zoom analytics can tell you what types of devices people used to watch the program and what connection problems participants had.
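Many platforms will also let you export raw viewing data. If yours does, a few lines of Python can summarize it; in this sketch the file name, column header and video length are all assumptions, so swap in whatever your export actually contains.

```python
# Minimal sketch: summarizing exported view durations for a recorded program.
# "video_views.csv" and the "view_duration_seconds" column are hypothetical;
# match them to your platform's real export format.
import csv

VIDEO_LENGTH_SECONDS = 3600  # assumption: a one-hour program

durations = []
with open("video_views.csv", newline="") as f:
    for row in csv.DictReader(f):
        durations.append(float(row["view_duration_seconds"]))

average = sum(durations) / len(durations)
completed = sum(1 for d in durations if d >= VIDEO_LENGTH_SECONDS * 0.9)
bounced = sum(1 for d in durations if d <= 10)

print(f"Total views: {len(durations)}")
print(f"Average view duration: {average / 60:.1f} minutes")
print(f"Watched at least 90% of the video: {completed / len(durations):.0%}")
print(f"Left within 10 seconds: {bounced / len(durations):.0%}")
```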
Ask questions and reflect
Finally, I take some time to reflect after each program. Here are some questions I ask myself:
- Did I meet my goal(s) for this program?
- Were there any stand-out interactions?
- How are follow-ups progressing? What feedback am I getting?
- What worked well?
- Could anything have gone better?
- Would I do this program again? Why or why not?
Evaluating programs takes some time, but the information you get is well worth the effort. Have you found other ways to evaluate your virtual programs? Share them below in the comments.