Wikimedia Education Greenhouse/Unit 3 - Module 2


Impact evaluation



A year from now...

Imagine we're in the year 2021.

You have successfully completed this online course (congratulations!) and you have developed your project idea from start to finish (double congratulations!). You implemented your evaluation plan and collected a lot of information about your project's process and the goals it achieved. You presented this information in a comprehensive report and shared it with different stakeholders.

To think a little more deeply about some of these details, answer the three questions below!

  1. Why did you create this evaluation?
  2. Who read your evaluation report?
  3. Then what happened?

Let's start with the "Why?"

Evaluation is a complex and extensive topic. You can find entire books, courses, conferences, etc. dedicated solely to exploring this area of project management. This lesson will take you through the basic concepts needed to understand the purpose of impact evaluation, design an effective and feasible evaluation plan, and decide what to do with the information collected through your evaluation activities.

Listen to Vasanthi Hargyono walk us through the importance of designing an evaluation strategy with purpose at its base. As you watch the video, reflect on the following questions: Is this practice relevant for your Wikimedia education project? Have you done this or a similar process before? Feel free to share your answers in the Discuss section!

W&E Greenhouse - Impact evaluation (Unit 3, Module 2)

You can find the full transcript of this video at this link.

What and how?

What?

Conducting an impact evaluation allows us to determine what changed as a result of our Wikimedia education project and to acquire valuable lessons that will help us improve future interventions. Mercy Corps' Design, Monitoring and Evaluation Guidebook[1] tells us that impact evaluations "seek to determine the results of a project; its impact or effects on the target population". UNICEF[2] adds that impact evaluation "goes beyond looking only at goals and objectives to also examine unintended impacts". It helps us answer some key questions: Was our project successful at achieving its outcomes and goals? Will we replicate this initiative? Will we scale it up? Will we change it? Will we discontinue it completely?

How?

In the previous module we established that monitoring and evaluation activities have our logic models at the base. Take a look at the example in Section 7: Example of a Logic Model with Evaluation Questions of the UW Extension Course on Logic Models. Can you identify the components of the logic model in the first section and how they relate to the Key Evaluation Questions? In their example you can see that there are evaluation questions corresponding to each part of the project's development, including the inputs (staff, money, partners) and outputs. In this module we are focusing only on evaluating outcomes and long-term goals: the higher levels of the logic model, and the ones that can help us understand whether our project is creating the change it intended. In other words, our evaluation efforts are an extension of our logic model. We will explore more of this in the next section.

Additional considerations

When planning our evaluation activities, we need to take into account additional factors that will determine whether the evaluation plan is feasible. These factors will influence the type of evaluation activities you can develop, and they might require you to prioritize the most important outcomes/goals to measure according to the purpose you have set for your evaluation. Factors to consider include:

  • Money - Do you need any financial resources to carry out your evaluation activities? For example: printing surveys, traveling to conduct interviews, recording professional videos, etc.
  • Time - How much time do you have available to conduct evaluation activities? For example: face-to-face interviewing, classroom observations, detailed data analysis, etc.
  • Human resources - Can other members of your team get involved in conducting evaluation activities? For example: dividing up responsibilities for data collection, data analysis, reporting, etc.
  • Expertise - Do you have the knowledge and skills needed to carry out these evaluation activities? Do you need additional support from experts?

From logic model to evaluation plan

The final stretch!

In this section we will see the different elements that can help us build an evaluation plan with logic models at the base. We will explore this together with the team of Wikimedians you met in previous units. Remember them?

Hey there! It's Laura, Alex, and Isaac again! We are developing the evaluation plan for our WikiCafé project. The main purpose of our evaluation is to learn whether our project activities had the intended impact on our participants, and to see if this is a project we can replicate in other school districts. We intend to share our evaluation report with other Wikimedia communities, with the school principal who supported us, and with potential donors for a future edition of this project.

Baseline

At the beginning of the previous unit you learned how conducting a needs assessment before the start of a project can help us collect data and identify the needs or gaps present in our participants and context. This activity also provides us with a baseline: an understanding of our audience at the beginning of the project against which we can identify the changes that occur as a result of our intervention.

In their Design, Monitoring and Evaluation Guidebook[1], Mercy Corps states that: "For impact evaluations, the baseline data gives a starting point against which further progress can be measured. Without a baseline, it is extremely difficult for an evaluation to gauge project impact".

To see how this looks, let's go back to the project from our team of Wikimedians:

Before the beginning of our project, we conducted a needs assessment to learn more about our participants. We reviewed the surveys and interviews we developed and learned that:

  • Only 1 out of the 15 participants knew how to write a very basic SPARQL query (see the example below)
  • Only 2 out of the 15 participants stated that they understood the value of Wikidata and other Wikimedia projects as OERs
  • All of the participants expressed that they wanted to be able to use SPARQL queries to create better didactic resources for their classes but they were not sure how to start.

We are very excited to see how their skills and perspectives change after our project implementation!
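
To make the baseline above more concrete, here is a minimal sketch of what running "a very basic SPARQL query" against Wikidata can look like. This is only an illustration: the query (listing a few items classified as house cats) and the use of Python with the requests library are our own choices, not part of the WikiCafé project materials.

```python
# A minimal sketch: run a very basic SPARQL query against the public
# Wikidata Query Service. Requires the `requests` package.
import requests

# List five items that are an instance of (P31) "house cat" (Q146),
# together with their English labels.
QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    # The Wikidata Query Service asks clients to identify themselves.
    headers={"User-Agent": "WikiCafeExample/0.1 (educational illustration)"},
)
response.raise_for_status()

for binding in response.json()["results"]["bindings"]:
    print(binding["itemLabel"]["value"], "-", binding["item"]["value"])
```

Teachers at the very start of a project like WikiCafé would typically run such queries directly in the web editor at query.wikidata.org; the script form is shown here only to keep the example self-contained.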

Evaluation questions

Maybe you have seen this quote attributed to Vanessa Redgrave[3]: "Ask the right questions if you're going to find the right answers". The next step to build our evaluation plan is to craft the questions that will help assess how effective our project really was in creating the change we wanted to see.

As general guidelines, K. Shakman and S. Rodriguez[4] state that evaluation questions should:

  • Be relevant and in accordance with your logic model's objectives and impact statements. Can the questions be answered given the program?
  • Be prioritized in accordance with the purpose and audience of your evaluation. What are the key, most important questions?
  • Respond to the resources and availability that the team has to collect this information. Are the questions practical and appropriate to the capacity you have to answer them?
  • Be clear and jargon-free. Would someone who is not steeped in the language of your particular field understand the question?

To see this in action, let's check how our team of Wikimedians has created the evaluation questions for their WikiCafé project:

Alright! First, let's review our logic model:

  • Inputs and resources: Team expertise; Time; Knowledge of teachers' skills and expectations (through needs assessment activities); Meeting space; Wikidata volunteer
  • Activities (Outputs): 2 WikiCafé gatherings per month (6 in total); Development of teaching guides about Wikidata
  • Objectives (Short-term and intermediate-term outcomes): (O1) Teachers can comfortably conduct SPARQL queries on Wikidata to create additional didactic resources for their classes; (O2) Teachers become advocates for the important role of Wikidata in the OER field
  • Impact (Long-term impact): Teachers improve their education practice by becoming active contributors and users of Wikidata

Now we are going to focus on the objectives and impact to create our evaluation questions. We created two evaluation questions for each of these outcomes:

  • (O1) Teachers can comfortably conduct SPARQL queries on Wikidata to create additional didactic resources for their classes
      ◦ Can teachers conduct SPARQL queries with ease and at different levels of complexity?
      ◦ Are teachers creating the didactic resources they need by using Wikidata and SPARQL queries?
  • (O2) Teachers become advocates for the important role of Wikidata in the OER field
      ◦ Can teachers express the value of Wikidata in the OER field?
      ◦ Are teachers excited about the use of Wikidata as a tool to create OERs?
  • (Impact) Teachers improve their education practice by becoming active contributors and users of Wikidata
      ◦ Are teachers incorporating didactic resources created with Wikidata in their teaching practice?
      ◦ Are there any obstacles that prevent teachers from applying the new Wikidata skills and knowledge they acquired through our project?

We think these questions can help us collect accurate and relevant information that will tell us if we achieved the change we wanted to see through our project.

Indicators of success

Now that we have the questions, how will the answers to these questions tell us if our project was truly successful?

Indicators (also called "metrics") are Specific, Measurable, Accurate, Relevant and Time-bound (SMART) statements that provide us with a picture of the desired achievements of our project. They align with the evaluation questions and outcomes/objectives by stating the key values we aim to achieve. To create indicators of success for our evaluation plan, K. Shakman and S. Rodriguez[4] suggest that we reflect on these three questions:

  • What would achieving the goal reflected in the outcome look like?
  • How would we know if we achieved it?
  • If I were visiting the program, what would I see, hear, or read that would tell me that the program is doing what it intends?

Let's see this in practice again with our group of Wikimedians!

We think these are the indicators that will tell us if our project was successful:

  • (O1) Teachers can comfortably conduct SPARQL queries on Wikidata to create additional didactic resources for their classes
      ◦ Question: Can teachers conduct SPARQL queries with ease and at different levels of complexity?
        Indicator: By the end of the WikiCafé project, 60% of teachers can conduct 3 types of SPARQL queries with very little assistance from the instructors.
      ◦ Question: Are teachers creating the didactic resources they need by using Wikidata and SPARQL queries?
        Indicator: By the end of the WikiCafé project, 60% of teachers create 3 new didactic resources using Wikidata to incorporate in their teaching practice.
  • (O2) Teachers become advocates for the important role of Wikidata in the OER field
      ◦ Question: Can teachers express the value of Wikidata in the OER field?
        Indicator: By the end of the WikiCafé project, 60% of teachers can easily express the value of Wikidata in the OER field.
      ◦ Question: Are teachers excited about the use of Wikidata as a tool to create OERs?
        Indicator: By the end of the WikiCafé project, 60% of teachers report that they are excited to continue using Wikidata to create their own OERs.
  • (Impact) Teachers improve their education practice by becoming active contributors and users of Wikidata
      ◦ Question: Are teachers incorporating didactic resources created with Wikidata in their teaching practice?
        Indicator: Six months after the end of the WikiCafé project, 40% of teachers report that they continue to use didactic resources they create with Wikidata.
      ◦ Question: Are there any obstacles that prevent teachers from applying the new Wikidata skills and knowledge they acquired through our project?
        Indicator: Six months after the end of the WikiCafé project, less than 30% of teachers express that they have faced obstacles that prevent them from using Wikidata in their teaching practice.
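
The first indicator above mentions "3 types of SPARQL queries", but the example does not define which types those are. Purely as a hypothetical illustration, an endline skills check could ask teachers to write queries at three escalating levels of complexity, for instance:

```python
# Hypothetical skills check only: the WikiCafé example does not specify
# which "3 types of SPARQL queries" are assessed. One plausible rubric is
# three escalating levels of complexity, sketched here as query templates.

SKILL_CHECK = {
    # Level 1: a single triple pattern (all items that are sovereign states).
    "basic triple pattern": """
        SELECT ?country WHERE { ?country wdt:P31 wd:Q3624078 . }
    """,
    # Level 2: labels plus a FILTER (states with population over 100 million).
    "filter and labels": """
        SELECT ?country ?countryLabel ?population WHERE {
          ?country wdt:P31 wd:Q3624078 ;
                   wdt:P1082 ?population .
          FILTER(?population > 100000000)
          SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
        }
    """,
    # Level 3: aggregation (the ten creators with the most paintings).
    "aggregation": """
        SELECT ?creator (COUNT(?painting) AS ?works) WHERE {
          ?painting wdt:P31 wd:Q3305213 ;
                    wdt:P170 ?creator .
        }
        GROUP BY ?creator
        ORDER BY DESC(?works)
        LIMIT 10
    """,
}

for level in SKILL_CHECK:
    print("Assessed query type:", level)
```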

Data collection methods

We created clear evaluation questions and set measurable indicators of success for our project. How do we know if we reached these indicators? What tools can we use to collect this data?

Deciding on the right data collection method will depend on the type of information you want to document (quantitative? qualitative? both?), the availability of your resources, and the purpose of your evaluation. We have seen different data collection methods in the module about needs assessments and in the previous module on monitoring and course correction. You can use ready-made data collection tools, modify past tools, or create your own from scratch. For more alternatives, you can see a handy chart of data collection methods for different purposes at this link[5]. In any case, it is important to allocate the necessary time and resources to this process, as you did for the other stages of your project planning.

Let's see what data collection methods our team of Wikimedians is using:

Our evaluation plan is looking good!

We are moving the information we collected into a table so it is more organized and easier to understand for external audiences.

We decided to design our own data collection tools, since we could not find any existing ones that responded to our needs. We also gathered the teachers' contact information and their approval to interview them 6 months after the project. Sadly, we think we might not have the time and resources to visit each teacher for classroom observations or to conduct printed surveys, so we will use online surveys instead and rely on participants' honesty. For the same reason, we are also planning to conduct the interviews via Skype. We are positive that it will be a very informative experience!

Data analysis and report

UNICEF's Overview of Impact Evaluation[6] states that "evaluations must focus on producing useful and accessible findings, not just academic reports". What is the use of all the data you collected through interviews, surveys, tests, etc., if you don't effectively organize it and present it to the relevant audiences? After collecting the needed information through your chosen data collection methods - for example, interviewing your participants or reviewing tests and surveys - your next step is to reflect on what that information tells you: how it answers your evaluation questions and how it stands against your indicators of success. As Mercy Corps' Guidebook[1] puts it: "What does this mean for our project?" and "What conclusions can we draw from this information?".
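
As a minimal sketch of this reflection step, and assuming invented endline results plus the 60% target from the WikiCafé example, checking how collected data "stands against" an indicator can start with a simple tally:

```python
# Minimal sketch: compare collected results against an indicator threshold.
# The numbers below are invented for illustration; in practice they would
# come from your surveys, interviews, or skills assessments.

# One entry per participating teacher: how many of the three assessed
# SPARQL query types they completed with little or no assistance.
query_types_completed = [3, 3, 2, 3, 1, 3, 0, 3, 2, 3, 3, 1, 3, 2, 3]

TARGET_SHARE = 0.60        # indicator: 60% of teachers...
REQUIRED_QUERY_TYPES = 3   # ...can conduct 3 types of SPARQL queries

achieved = sum(1 for n in query_types_completed if n >= REQUIRED_QUERY_TYPES)
share = achieved / len(query_types_completed)

print(f"{achieved}/{len(query_types_completed)} teachers met the criterion "
      f"({share:.0%}; target {TARGET_SHARE:.0%})")
print("Indicator met." if share >= TARGET_SHARE else "Indicator not met.")
```

A tally like this answers the "did we reach the number?" part; the qualitative "why or why not?" still comes from discussing the results with your team and stakeholders.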

As with other activities we have seen earlier, analyzing the data you have collected does not have to be a solitary endeavor. Engage different members of your team, seek thought partnership from your internal stakeholders, and discuss your reflections with others to make sure your conclusions are objective and unbiased.

Finally, the way you organize this information to share with different audiences can vary. A written report (with varying degrees of detail and depth) is appropriate and often necessary, but do not be afraid to get creative! Contributors to the National Council for Voluntary Organisations (UK)[7] suggest other reporting formats for presenting your evaluation, such as infographics[8], animated videos[9], blog posts[10], and podcasts[11].

It all comes down to who your audiences are and the relevant information and lessons you want to share with them.

Course Portfolio Assignment: The start of an evaluation plan for your Wikimedia education initiative


Let's start building an evaluation plan for your Wikimedia education project!

  • Step 1: Review the logic model you created in the previous unit (if you have not completed that module yet, take some time to do so and create a logic model for your project idea following that lesson) or another logic model you have previously created.
  • Step 2: On your Course Portfolio, create a section called "Evaluation Plan". First, write down the main purpose of your evaluation (the "why") and the stakeholders that you would share this information with (the "who").
  • Step 3: Choose two outcomes/objectives to focus your evaluation plan on (they can be the same ones you chose for the previous activity on monitoring) and your long-term goal/impact.
  • Step 4: For each outcome/objective and for your long-term goal/impact, create two evaluation questions. For each evaluation question indicate the data collection method you will use and when you will collect this information. Remember to use a visual representation that is easy to follow (such as the tables presented in the example of the previous section).

If you need some inspiration, check out the work of participants of the first cohort of the Wikimedia Education Greenhouse online course.

References

  1. Mercy Corps. Design, Monitoring and Evaluation Guidebook. https://www.pm4dev.com/resources/manuals-and-guidelines/119-dme-guidebook-mercy-corps/file.html
  2. UNICEF-IRC. Overview of Impact Evaluation. https://www.unicef-irc.org/publications/pdf/brief_1_overview_eng.pdf
  3. "Vanessa Redgrave". Wikipedia. 2020-06-19. https://en.wikipedia.org/w/index.php?title=Vanessa_Redgrave&oldid=963342605
  4. Shakman, K., & Rodriguez, S. Logic Models for Program Design, Implementation, and Evaluation: Workshop Toolkit. https://files.eric.ed.gov/fulltext/ED556231.pdf
  5. "Data Collection Method Decision Matrix". Google Docs. Retrieved 2020-07-01.
  6. Rogers, Patricia. Overview of Impact Evaluation. UNICEF-IRC. https://www.unicef-irc.org/publications/pdf/brief_1_overview_eng.pdf
  7. Brennan, Rob. "How to use creative reporting formats for evaluation". NCVO Knowhow (knowhow.ncvo.org.uk). Retrieved 2020-07-01.
  8. "Week 16: Infographics to make your evaluation results go viral". BetterEvaluation. 2014-04-17. Retrieved 2020-07-01.
  9. "RSA Animate". The RSA (www.thersa.org). Retrieved 2020-07-01.
  10. "What we learned about closing Wikipedia's gender gap using a classroom assignment". Wikimedia Foundation. 2019-09-26. Retrieved 2020-07-01.
  11. "I Challenged My Students to Design My Classroom and..." Every Classroom Matters with Cool Cat Teacher (www.stitcher.com). Retrieved 2020-07-01.


Go back to Unit 3 - Module 1
Go to Unit 3 - Module 3