eLearning Defects - Using quality metrics to build better elearning
How to measure quality and reduce elearning defects
Typically, this question is answered by the supplier's ability to meet the design aesthetics specified in the training needs assessment, or by the alignment of a particular eLearning complexity level (the number and complexity of interactions) with a specific delivery date and budget.
While these measurements or assessments might help in vendor selection, they often leave the customer wondering, "What will happen when time gets tight at the end of my project?" Will complexity and features be sacrificed as the supplier attempts to finish on time? Will quality suffer as the supplier rushes eLearning modules out the door? Worse still, will multiple versions of poor-quality 'final' eLearning files result in re-work, re-test and re-publish? At this point the customer may grow uneasy as the multiple iterations leave no time to confirm that the desired levels of interaction are in place or that the training effectively meets the learning objectives. It is easy to see how evaluating suppliers only on price, complexity and schedule attainment leaves a sizeable gap and area of risk.
In addition to the typical selection criteria, measuring quality, in the form of defects per minute of completed eLearning, provides a balanced view of a supplier's ability to deliver engaging eLearning at the specified level of interaction, on time and on budget. This is not a common practice, nor a promise made by many learning and development/eLearning consulting companies.
At Matchstick, Inc., tracking and recording defects during all phases of testing and customer acceptance is a critical process. Going beyond defect capture, Matchstick measures the rate of defects by comparing the number of defects found against the total minutes of completed eLearning. We do this to minimize the testing burden on our customers and to ensure that the final product under review is fully functional and error-free on first and final delivery.
More specifically, we hold ourselves accountable for quality by measuring the following metrics:
Defect leakage: Measurement of defects identified in the Alpha release that are found again in the Beta release, and likewise from Beta to Gold copy. The objective is zero defects leaking from one delivery phase to the next.
Beta Defects %: Measurement of the number of new defects found in the Beta release as a percentage of the total minutes of eLearning. Our goal is a functional defect rate below 5% of total curriculum minutes. For example:
If a curriculum of four modules contains a total of 60 minutes of eLearning and the customer discovers two defects in Beta/acceptance testing, Matchstick will have achieved a 3.3% defect rate.
(Total Beta defects)
____________________ × 100 = % Beta defects
(Total module minutes)
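To make the two metrics above concrete, here is a minimal sketch of how they might be computed. The function names and defect IDs are hypothetical illustrations, not part of any Matchstick tool.

```python
# Illustrative sketch of the two quality metrics described above.
# Function names and defect IDs are hypothetical, not a Matchstick tool.

def defect_leakage(prior_release: set, next_release: set) -> set:
    """Defects found in one release that reappear in the next (goal: empty set)."""
    return prior_release & next_release

def beta_defect_rate(total_beta_defects: int, total_module_minutes: float) -> float:
    """New Beta defects as a percentage of total minutes of completed eLearning."""
    if total_module_minutes <= 0:
        raise ValueError("total_module_minutes must be positive")
    return 100.0 * total_beta_defects / total_module_minutes

# Example from the text: a 60-minute, four-module curriculum with 2 Beta defects.
print(f"{beta_defect_rate(2, 60):.1f}% Beta defect rate")  # 3.3% Beta defect rate

# Leakage check between hypothetical Alpha and Beta defect lists.
alpha_defects = {"D-101", "D-102"}
beta_defects = {"D-102", "D-201"}
print(defect_leakage(alpha_defects, beta_defects))  # {'D-102'}
```

Tracking defects by a stable identifier across releases is what makes the leakage comparison possible; a simple set intersection then reveals any defect that survived a delivery phase.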
Matchstick consistently delivers this level of quality, giving our customers confidence that their program will be delivered on time, on budget and to the standards of interaction, engagement and aesthetics outlined in the project agreements and statements of work.
Reach out to us to learn more. We will be happy to share our experiences and provide guidance, whether you are just starting an eLearning/mLearning program or are already underway.