Instructional design/User testing of e-learning courses/Analyzing the Data




Reviewing the Results


Once you've compiled your data, review each item to make sure you understand the comment. In some cases, you may need to obtain clarification. Methods for doing so include:

  • Circling back with the user to ask what he/she meant by the comment
  • Going into the course and trying to reproduce the error
  • In the case of software application training, verifying the way the application functions


Analysis and Action


As you review, clarify, and verify each comment, you'll want to analyze reported problems and determine their severity. In other words, if there is a complaint or requested revision to the content, should you fix it? Based on a problem's severity, you'll be able to propose an action to take.


Here is one way to categorize your data from the highest to the lowest priority:

Deal Breaker: An example of a deal breaker is anything that provides incorrect information to the learner. You never want to convey wrong information. A deal breaker can also be a bug in the courseware that causes it to malfunction, or that gives the learner the perception that something is "broken." Another example might be a design element that creates widespread frustration or confusion across the majority of testers. Even though the design made it through all the previous formative evaluations (i.e., Analysis/Design, Storyboard, Prototype, ALPHA), if the testers give it a resounding "thumbs down," you should seriously consider a redesign.


Serious: An example of a serious comment is one where a user makes a valid suggestion for revising the content, but not because the content is wrong. Rather, speaking as a Subject Matter Expert (SME), he/she suggests a way to improve its presentation. Deciding on an action for this type of comment is often mediated by budget and timeline constraints. In other words, what is the level of effort for this change, and do you have the time and money to make it? If not, suggestions like this should be saved for a future revision of the courseware.


Minor: An example of a minor problem is one where a user makes a suggestion based on his/her personal preference. Maybe he/she thinks that the narrator reads the voice-over too fast, comments that the material is too pedantic, or would like to see a yellow highlight instead of a blue one. Unless a majority of users report the same thing, you can probably designate comments like these as a "NO FIX." If, upon reviewing a comment against the course, you find that you agree with it, you again need to consider the level of effort for the change and whether you have the time and money to make it.


N/A: You can designate feedback as "not applicable" or "N/A" when it is simply a comment (e.g., praise for the course) or a reported problem that you could not reproduce.
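
If you track your compiled feedback in a script rather than a worksheet, this triage scheme is easy to encode. Below is a minimal sketch in Python; the Severity levels, the FeedbackItem structure, and the sample items are illustrative assumptions of this sketch, not a standard tool or format:

    from dataclasses import dataclass
    from enum import IntEnum

    # Severity levels, ordered from lowest to highest priority.
    # These names mirror the categories above and are assumptions
    # of this sketch, not an industry standard.
    class Severity(IntEnum):
        NA = 0
        MINOR = 1
        SERIOUS = 2
        DEAL_BREAKER = 3

    @dataclass
    class FeedbackItem:
        comment: str
        severity: Severity
        action: str  # e.g. "FIX", "NO FIX", "FUTURE REVISION", "N/A"

    def prioritize(items):
        """Return feedback sorted from highest to lowest severity."""
        return sorted(items, key=lambda item: item.severity, reverse=True)

    items = [
        FeedbackItem("Narration reads too fast", Severity.MINOR, "NO FIX"),
        FeedbackItem("Step 3 shows the wrong menu path", Severity.DEAL_BREAKER, "FIX"),
        FeedbackItem("Great course!", Severity.NA, "N/A"),
    ]
    for item in prioritize(items):
        print(item.severity.name, "-", item.comment, "->", item.action)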


Another important aspect of analysis is determining whether a proposed fix addresses a local or a global issue. If you suspect the problem might recur in similar content, such as other popup boxes or interactions, do a full QA pass of the course to make sure every instance is caught and fixed.
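
One lightweight way to spot potential global issues is to tally reported problems by the piece of the course they touch. A sketch, again in Python, assuming each comment was tagged with a hypothetical component label when the data was compiled:

    from collections import Counter

    # Each tuple pairs a component tag with a reported problem. The
    # tags are hypothetical labels assigned while compiling the data.
    reports = [
        ("popup", "Close button hidden behind text"),
        ("quiz", "Question 4 accepts no answer"),
        ("popup", "Popup text clipped on small screens"),
    ]

    # A component reported more than once is a candidate for a
    # course-wide QA pass rather than a one-off local fix.
    counts = Counter(component for component, _ in reports)
    for component, n in counts.items():
        scope = "global - QA the whole course" if n > 1 else "local"
        print(f"{component}: {n} report(s) -> {scope}")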


Example


Continuing with the scenario introduced in the previous topic, Organizing the Data, the following table shows the severity and action determined through analysis of the user test results. Note that a column has simply been added to the existing worksheet for this information.





