Webucator's Free Instructional Design Tutorial

Lesson: Evaluating the Effectiveness of the Learning

Welcome to our free Instructional Design tutorial. This tutorial is based on Webucator's Instructional Design Training course.

The final step in the instructional design process is evaluating the effectiveness of the learning.

Lesson Goals

  • Learn why evaluation is important.
  • Learn about Kirkpatrick's four levels of assessment.
  • Learn to assess knowledge, skills, behavior, and attitude.
  • Learn about formative and summative evaluations.
  • Learn about measuring return on investment (ROI).

Why Evaluate: The Importance of Evaluation

An important aspect of learning is assessing its success and effectiveness, both as the learning is taking place and after it is completed. There are a number of reasons for this.

  • For evaluation that takes place during learning, facilitators can use the results to adjust instruction while it is underway.
  • Clients want to know if learning objectives were achieved.
  • The instructional designer will want to incorporate any information gained into a future iteration of the course or other courses.
  • The ID will also need to know if any follow-up learning should be delivered.
  • The learners' manager will want to know if her employees gained the necessary skills to improve their on-the-job performance.
  • The organization will want to know if its training investment has paid off.

What Is the Purpose of Evaluating?

The purpose of evaluating is mainly to determine if the goals and objectives of the learning have been achieved.

Kirkpatrick's Four Levels of Assessment

Kirkpatrick's four levels of assessment is a framework for measuring learning effectiveness. Developed by Donald Kirkpatrick in the late 1950s and revised several times since, the levels give instructional designers a structured way to evaluate training and identify improvements for future iterations.

Reaction: Level 1

Reaction measures the learners' reactions and feelings about the training. This level essentially asks: Did they like the training? The ideal outcome is for learners to feel positive about the training, that it was effective and enjoyable.

The information gained from this level can be used to improve future training.

To measure reaction, IDs often use surveys or questionnaires issued to students. Questions to ask could include:

  • Did you feel the instruction was effective?
  • What are your thoughts on the facilitator?
  • What were the strengths and weaknesses of the instruction?

Learning: Level 2

The next level, level 2, is the learning level. This level seeks to answer the question: Did they learn it? The ideal outcome is that learners' knowledge will have increased as a result of the training.

The learning objectives that the instructional designer outlined during the process are what should be measured at this level.

The learning level is often assessed through an end-of-course quiz to determine if the learning objectives were achieved. Sometimes, the same questions are administered both before training begins and after it is completed (a pre-test/post-test design) to measure the gain in knowledge.
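As a minimal sketch of the pre-test/post-test comparison (the scores below are hypothetical, and normalized gain is one commonly used metric rather than anything prescribed by this lesson), the gain for a single learner might be computed like this:

```python
def learning_gain(pre: float, post: float) -> dict:
    """Compute raw and normalized gain for one learner's
    pre-test and post-test scores (both on a 0-100 scale)."""
    raw_gain = post - pre
    # Normalized gain: the fraction of the *possible* improvement achieved.
    normalized = raw_gain / (100 - pre) if pre < 100 else 0.0
    return {"raw": raw_gain, "normalized": normalized}

# Hypothetical learner: scored 40 before training, 70 after.
result = learning_gain(pre=40, post=70)
print(result)  # {'raw': 30, 'normalized': 0.5}
```

A normalized gain of 0.5 means the learner closed half the distance between the pre-test score and a perfect score, which is often more informative than the raw difference when learners start at different levels.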

Behavior: Level 3

The next level is behavior. Did they use it? In other words, did learners apply the knowledge that they gained? It's important to keep in mind at this level that there may be barriers to behavior changes. Perhaps learners did gain new knowledge but are unable to apply it on the job because of management's reluctance to change. Conditions have to be favorable for a behavior change to take place.

Measuring behavior is something that should take place over time. One way to accomplish this is to go into the learners' workplace and observe behavior. Other means of gathering this information could include interviews with learners or their supervisors and questionnaires. It is important to note if the behavior change is being supported within the organization.

Results: Level 4

The final Kirkpatrick level is the results level. The question here is: Was the impact of the instruction felt in the organization?

At this level, the ID analyzes the results of the learning. This may be the most difficult level to measure. Like the behavior level, this level needs to be assessed over time, post-training. Depending on the training, considerations could include things such as:

  • Has employee turnover decreased?
  • Has employee job satisfaction increased?
  • Are there fewer customer complaints?

Like level 3, measurement at this level relies on quantifiable outcomes gathered over time, such as increased sales figures, higher scores on customer satisfaction surveys, or a decrease in late deliveries.

More on Kirkpatrick's Four Levels of Assessment

There are some considerations to keep in mind about Kirkpatrick's four levels of assessment. The third and fourth levels, behavior and results, may be impractical to assess. It can be expensive and time consuming to evaluate learners on these two levels.

The model is a systematic way of assessing learning, but in practice, utilizing all four levels is often impractical.

Reviewing the Four Levels of Assessment

Duration: 15 to 20 minutes.

In this exercise, you will use your knowledge of Kirkpatrick's levels of assessment to answer the following questions.

Keep in mind this scenario from the previous lessons: You are working as an ID in your organization, and the head of human resources has come to your group with a specific need for the group to develop some training on working with difficult coworkers.

  1. You are planning to interview a few participants of the training to determine their reaction to it. What are some questions you might put on the questionnaire you plan to administer?
  2. How might you measure behavior change in learners who took the course?
  3. Can you think of ways you might assess the results level?


  1. Answers will vary but may include: Did you think the training worked? Did you think the facilitator was skilled? Is there anything you would change about the way the training was delivered?
  2. Answers will vary but may include the following: at some point after the training is complete, you could spend a few hours with the team, watching them work and noting whether they are experiencing less conflict. You could also speak with the learners' manager to see if any changes have been noticed.
  3. Answers will vary but may include meeting with the team's manager and human resources a few months after training is complete to review whether conflict on the team has decreased. You could also ask whether employee turnover has decreased.

Assessing Knowledge, Skills, Behavior, and Attitude

Instructional design ultimately seeks to bring about change in knowledge, skills, behavior, or attitude. In our business writing class, we wanted students to improve their writing skills, which would be a change in knowledge and skills.

How to Conduct Evaluations

The purpose of evaluations is to determine the level of success of the learner, that is, have instructional goals and objectives been met? Two criteria for determining this are validity and reliability.

When designing evaluations, the ID should attempt to ensure that the evaluations are valid. A valid evaluation is one that allows the ID to determine if learners met the objectives. They should also be reliable. A reliable evaluation is one that, if administered at different times to the same learner, would produce the same results.


So what tools does an instructional designer need to assess these changes?

  • Question-based Tests: Used for measuring knowledge and skills obtained. For example, if you created a course on the basic features of PowerPoint, you might evaluate learner skills and knowledge using a 20-question test on the topics that were covered.
  • Observation: Used for measuring attitude change and behavior. If the course was on motivating employees as a manager, to evaluate this, the learner could be observed in interactions with employees.
  • Direct testing: Used when specific skills need to be evaluated. For example, in a course on creating a sales spreadsheet, learners may create an actual spreadsheet for the evaluation.

Reviewing How to Assess Knowledge, Skills, Behavior, and Attitude Changes

Duration: 15 to 20 minutes.

In this exercise, you will use your knowledge of assessing changes in knowledge, skills, behavior, and attitude to answer the following questions.

Keep in mind this scenario from the previous lessons: You are working as an ID in your organization, and the head of human resources has come to your group with a specific need for the group to develop some training on working with difficult coworkers.

  1. You would like to determine if the students have learned how to resolve conflict. How might you do this?


  1. Answers will vary but may include that you could observe the learners in the workplace, looking for a situation that causes conflict, and then assess how they respond.

Formative and Summative Evaluations

Two different forms of evaluations are often used in instructional design to assess whether learners have gained the intended knowledge:

  1. Formative evaluations.
  2. Summative evaluations.

Formative Evaluations

The purpose of formative evaluations is to evaluate learning as it is taking place. Formative evaluations serve to guide the learning, by gathering learner feedback.

Often, formative evaluations employ qualitative rather than quantitative feedback. This feedback can be used to guide learners as the instruction continues.

For example, in an online course, a formative evaluation may take the form of a quiz at the end of a topic. If the learner gets a question wrong, the course can be designed to take the learner back to the specific screen where the information was, so that he or she can review it.
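The branch-back behavior described above can be sketched in code. This is an illustrative sketch only (the question, answers, and screen identifiers are hypothetical, and a real course would implement this logic in an authoring tool or LMS rather than raw Python):

```python
# Each formative question records which screen covers its material.
questions = [
    {"prompt": "A formative evaluation takes place after learning ends. True or false?",
     "answer": "false",
     "review_screen": "lesson-3-screen-7"},
]

def check_answer(question: dict, response: str):
    """Return the screen to revisit if the response is wrong,
    or None if the learner may continue."""
    if response.strip().lower() == question["answer"]:
        return None
    return question["review_screen"]

print(check_answer(questions[0], "True"))   # lesson-3-screen-7
print(check_answer(questions[0], "false"))  # None
```

The key design point is that feedback is attached to the content, not just the score: a wrong answer routes the learner back to the relevant material instead of simply recording a miss.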

Summative Evaluations

The purpose of summative evaluations is to determine the overall effectiveness, or the sum, of the learning at its conclusion. Summative evaluations are most often quantitative; that is, a score will be recorded.

That score can be used for a variety of things. In a workplace, it is often recorded and reported to a supervisor as proof that learning was achieved.

When to Use Each Type

Formative evaluations take place as the learning is going on. An example of a formative evaluation is an end-of-lesson quiz.

Summative evaluations take place at the conclusion of the learning. An example of a summative evaluation is an end-of-course assessment. In an online learning environment, learners are often required to complete it before the course is considered finished.

In our course on business writing, an example of a formative evaluation might be a check-in quiz at the end of each lesson. These could include traditional question types, such as multiple choice, or more interactive assessments, such as composing an appropriate e-mail message. Feedback would be constructive in that it would point the learner back to concepts in the lesson.

A summative evaluation might be the end-of-the-course assessment. This could be 20 questions utilizing traditional formats, such as multiple choice and fill-in-the-blank. The feedback would be minimal, and the score recorded in the organization's learning management system (LMS) for collection by the learning administrator.

Writing Assessment Questions

Writing assessment questions is an important part of the evaluation process since they are a common way to measure if goals and objectives have been achieved.

Several assessment question types are used most often:

  • Multiple Choice: The stem question, followed by answer choices. Only one is correct.
  • Multiple Response: The stem question, followed by answer choices. More than one answer is correct.
  • True/False Questions: The stem statement, which the learner marks as either a true or false statement.
  • Fill-in-the-Blank Questions: A statement, containing one or more blanks that should be filled in by the student.
  • Matching Items: Typically consist of two columns, where the student matches items in the second column to items in the first column.

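To make the distinctions concrete, here is one way some of the question types above might be represented as data, along with a simple grader. This is an illustrative sketch only (the field names and questions are hypothetical, not a real quiz-engine format):

```python
# Hypothetical questions in a simple dictionary format. The answer is
# stored as a set so one grader handles single- and multi-answer types.
quiz = [
    {"type": "multiple_choice",
     "stem": "Which level of Kirkpatrick's model measures reaction?",
     "choices": ["Level 1", "Level 2", "Level 3"],
     "correct": {"Level 1"}},
    {"type": "multiple_response",
     "stem": "Which of the following are summative evaluations?",
     "choices": ["End-of-course exam", "Mid-lesson poll", "Final certification test"],
     "correct": {"End-of-course exam", "Final certification test"}},
    {"type": "true_false",
     "stem": "Formative evaluations occur during learning.",
     "correct": {"true"}},
]

def grade(question: dict, responses: set) -> bool:
    """A response is correct only if it exactly matches the answer set,
    so a multiple-response item requires every correct choice."""
    return responses == question["correct"]

print(grade(quiz[1], {"End-of-course exam"}))  # False: both answers required
```

Note how multiple choice and multiple response differ only in the size of the answer set, which is why learners must be told clearly when more than one answer is expected.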

Reviewing Formative versus Summative Evaluations

Duration: 15 to 20 minutes.

In this exercise, you will use your knowledge of formative and summative evaluations to answer the following questions.

Keep in mind this scenario from the previous lessons: You are working as an ID in your organization, and the head of human resources has come to your group with a specific need for the group to develop some training on working with difficult coworkers.

  1. You create a 25-question assessment that will be given to learners after they complete the course to test their knowledge. Is this an example of a formative or summative evaluation?
  2. Using this information from the course, write an assessment question: As a first step in trying to deal with the coworker, it is often advisable to speak with him or her privately about the situation.
  3. At the end of lesson 1 of the course, you have added a three-question "test your knowledge" quiz that the facilitator can use to determine if he or she needs to go back over any of the previous material. Is this an example of a formative or summative evaluation?


  1. This is an example of a summative evaluation as it takes place at the end of the learning, assessing overall learning by students.
  2. Answers will vary, but you could write a multiple-choice question, such as: What is an often effective first step in dealing with a difficult coworker?
    1. Speak to your manager about the issue at hand.
    2. Speak to the coworker privately.
    3. Ask another coworker if he or she has had issues with the difficult person.
  3. This is a formative evaluation, as it takes place during training and can be used by the instructor to make changes to learning as it is taking place.

Return on Investment (ROI)

Clients are often concerned with the training's return on investment (ROI). After investing money, time, and resources to train employees, it becomes important for stakeholders to know that the instruction has paid off.

Importance of Measuring ROI

Return on investment for clients can justify the money and resources spent on the training. In an era when companies are cutting costs, training is often an expense that must be justified to upper management to be approved. Therefore, organizations want to know that it will be worth the investment. They want a tangible way to measure results and see change.

For instructional design professionals, return on investment can equate to satisfied clients, which can in turn lead to repeat business.

How to Measure ROI

Measuring ROI can depend on the training need that was addressed. For example, if the training was to address an increase in customer complaint calls to a company, then to measure return on investment, the organization would compare the number of complaint calls received before and after the training was completed.

If the goal of the course was to increase sales in an organization, a measurable return on investment would be increased sales figures.
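When costs and benefits can both be expressed in dollars, ROI is commonly reported as net benefits over costs, as a percentage. The formula below is a widely used convention, and the figures are illustrative assumptions, not numbers from this course:

```python
def roi_percent(program_benefits: float, program_costs: float) -> float:
    """Commonly used ROI formula:
    ROI (%) = (net program benefits / program costs) * 100."""
    return (program_benefits - program_costs) / program_costs * 100

# Hypothetical training program: $40,000 to deliver, with the resulting
# reduction in complaint-handling costs estimated to be worth $60,000.
print(roi_percent(program_benefits=60_000, program_costs=40_000))  # 50.0
```

An ROI of 50% means the program returned $1.50 for every dollar spent; a negative value would mean the measured benefits did not cover the cost of the training.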

In our business writing course, measuring the ROI could first include assessing learners post-training. Can they now write an appropriate business communication? In other words, can they apply what they have learned, achieving the course's objective?

The original issue that precipitated the need for the learning was customer complaints. Have complaints decreased since the training was completed? This would indicate that value was achieved and the training was indeed successful.