Curriculum Planning/Evaluation


Evaluation of Training

The terms assessment and evaluation are often used interchangeably. For the purposes of this document, assessment will refer to measurements of student learning (through tests, demonstrations, questions, etc.), while evaluation will be used to refer to attempts to measure the value of a program.

Note that the value of an educational program depends on far more than learners’ test results. For example, if all graduating learners scored high marks on the final exam but half of the learners who enrolled dropped out before the program was half over, the program may still be evaluated as not very successful.

The value of evaluating a training program cannot be overstated. Gathering some data about how well a program worked, organizing that data so it will be understandable and useful to others, and saving that information so that it can be accessed by future trainers means that a program can be offered more than once and continually improved.

When planning ways to evaluate a program, you may consider the following:

Evaluation guidelines specified by the project:

Programs funded by an outside agency often include specific evaluation factors or key performance indicators in the project plan. These will need to be addressed before the project can be considered complete.

If evaluation guidelines are not laid out in the project plan, you will want to create your own guidelines. First, think about:

  • What do you hope this program will achieve? In the short term? In the long term?
  • What do the learners and the community hope this program will achieve?
  • How will you know that this program was successful? (“What will ‘success’ look like, in this case?”)
  • Will this program be offered only once? Many times? In a number of locations? With a number of different trainers?
  • If this program will be offered more than once, it would be useful to collect information so that it can be improved and updated over time. What information would support continuous improvement?


Next, think about what information you will be able to collect:

Baseline data:

Collecting baseline data before a learning project begins is ideal (but not always possible). If, for example, a goal of your program is to increase literacy skills, it will be important to know learners’ literacy levels before the program began.

Formative evaluation:

Find ways to determine how well the program is working before the program is over. Formative evaluations let you know what’s working and what’s not; you can use this data to make mid-course corrections before it’s too late to salvage the program. For example, you can check with learners (either informally and orally, or with a short survey) shortly after the program begins to make sure that the training approach, the learning environment, and the course materials are meeting learners’ needs. You may also do an interim assessment of learning with a mid-term test or demonstration of skill.

Summative evaluation:

A summative evaluation is carried out once the program is finished. Although the most common summative evaluation asks learners to complete a short written survey immediately after the course wraps up, summative evaluations may be done at various times after a program ends and for a variety of reasons. The nature, length, and available funding for your project will determine how many of these evaluation steps you’ll want to consider.

The Kirkpatrick Model identifies four levels of summative evaluation, each useful at a different time and for a different reason:

  1. Learner reaction (Kirkpatrick Level 1): This evaluation is carried out immediately after the training is completed. It is sometimes called a “smile sheet” when it is designed to simply capture how happy learners are with the training and the trainer. A good Level 1 evaluation will also capture how relevant the learners expect the training will be, and how engaging they felt the experience was. Learner reaction can be evaluated with a simple written survey, an informal post-training conversation, or a focus group.
  2. Evaluation of learning (Kirkpatrick Level 2): This evaluation may be carried out immediately after the training is completed or shortly afterwards. The evaluation often takes the form of a final exam, oral exam, or demonstration of skill.
  3. Behaviour change, transfer of learning (Kirkpatrick Level 3): For training to be truly useful, learners must use what they have learned in a program and change the way they do things accordingly. This type of evaluation is conducted at some point after learners return to the workplace or the community and apply (or not) what they learned in the training to the job at hand. This kind of evaluation might be done with a short survey circulated to the learners’ employers a month or so after the training.
  4. Training results and impact (Kirkpatrick Level 4): This type of evaluation may be scheduled for some time after the training period is complete. For example, if a goal of a training program was to increase the number of entrepreneurs in a community, it may take some time (months) before you will be able to count and report on new business start-ups.

Recommendations for revision:

Very short and informal training sessions may not include any formal evaluation. But at the very least, it is valuable to have some way to capture the trainer’s feedback about how the training went and what might be improved for next time. A one-page handwritten summary, included with the training materials, could capture the trainer’s experience by responding to questions like the following:

  • How was the time allotted for this training? Too much time? Not enough?
  • How were the training materials? Would you recommend different materials next time?
  • How was the class size? Would this training go better with more (or fewer) learners per class?
  • How was the learning environment? Did you have everything you needed? Would some additional resources have been helpful?

