One year ago, while working for Monde Home Products, a major appliance distributor based in Vancouver, I was a member of the training team responsible for delivering a comprehensive learning experience about our brands and products. The attendees were staff members of our clients, and they needed the right tools and knowledge to do their job: sell. A screen interface could have provided basic specs and product knowledge, but hands-on experience with the appliances gave them the opportunity to learn about their customers' needs first-hand. Our team conducted competitive research and designed a hands-on experience using live kitchen appliances to cook a marvellous lunch that the attendees could enjoy themselves. The idea was to give attendees a more immersive learning experience that is fun, memorable, and informative.
After a few sessions, we needed to justify the cost of these training sessions by measuring the value they gave our users (the attendees). If attendees walked out with more knowledge than they walked in with, and felt more confident in our brands, then the sessions were well worth our time. Hence, I set some objectives:
To get feedback and gauge the quality and effectiveness of each training session
To improve the quality and effectiveness of future sessions
To increase sales
The next step was to identify and choose methods for collecting feedback. We understood that the attendees were busy people, and interviewing them would be too time-consuming. Immersing ourselves in the context of their jobs would also take too much time and involve complicated logistics for our purpose. We needed immediate feedback, and as many responses as we could gather, since we had quite a few attendees each week.
We had their emails beforehand, so we felt the best way was to send them a short survey using Survey Monkey the day after the training, allowing some time to absorb the information while it was still fresh in their minds.
The survey consisted of rating-scale questions to quickly gather feedback on specific aspects of the training. The answers would become quantitative data we could analyze to gauge effectiveness and identify areas of strength and opportunity. We also included a few open-ended questions to gather qualitative feedback and gain more insight into what worked well and what didn't.
Not only were the phrasing and structure of the survey important, so as to avoid leading and loaded questions, but the environment for taking the surveys also had to be considered. For example, we needed to make sure participants knew their feedback was anonymous. Their names didn't matter for the purpose of our study, and therefore names were not collected in the surveys.
After numerous tweaks to our training based on the results, we looked at a year of aggregate data and found that the qualitative feedback supported the quantitative results from our surveys. This gave us a great overview of the quality of our training and made it easy to identify what worked for us and what needed changing. Because fewer than 5% of our attendees weren't in sales, the results contained a few outliers that we needed to discard: most of the training content during those particular sessions did not pertain to these individuals, which their qualitative answers made clear.
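The kind of aggregate analysis described above can be sketched in a few lines. This is only an illustration, not the actual study: the column names, ratings, and the "role" field used to discard non-sales outliers are all hypothetical.

```python
# Minimal sketch of aggregating rating-scale survey results,
# with hypothetical field names and made-up responses (1-5 scale).
from statistics import mean

# Hypothetical survey export: one dict per respondent.
responses = [
    {"role": "sales", "content_quality": 5, "hands_on_value": 4},
    {"role": "sales", "content_quality": 4, "hands_on_value": 5},
    {"role": "admin", "content_quality": 2, "hands_on_value": 2},  # non-sales outlier
    {"role": "sales", "content_quality": 5, "hands_on_value": 5},
]

# Discard non-sales respondents, since most session content did not apply to them.
sales_only = [r for r in responses if r["role"] == "sales"]

# Average each rating-scale question across the remaining respondents.
questions = ["content_quality", "hands_on_value"]
averages = {q: round(mean(r[q] for r in sales_only), 2) for q in questions}
print(averages)  # {'content_quality': 4.67, 'hands_on_value': 4.67}
```

In practice the open-ended answers were what flagged which respondents to treat as outliers; a real pipeline would combine that qualitative screen with a filter like the one above.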
These results were valuable for measuring against our objectives to improve the quality and effectiveness of our training, but we were unable to use the data to measure the correlation between training quality and sales numbers. There were simply too many variables that could impact our clients' sales performance, and any specific figures attached to their sales were private.
Through analysis of our quantitative and supporting qualitative data, we were able to identify areas that needed improvement and change.
Our biggest challenge was tailoring our training materials, and the time allocated to each part of the training, to the specific audience in every session. We had wanted to create and deliver a one-size-fits-all training template: each time something worked for one group, we assumed it would work for the next. That was a mistake. Some groups cared more about technical information, others about comparisons with similar brands, and so on. We needed to create multiple templates, each fitted to a type of group, which would leave more spare time for Q&A, less time needed to digest the most relevant information, and a more efficient use of everyone's time.
I shared my findings with my colleagues and was able to make these recommendations to the CEO of our company. The feedback from the CEO was positive, encouraging, and provided me with the tools to make the necessary changes to improve the overall experience for our wonderful attendees.
While working to improve the user experience of our product training, I realized the need to not only collect data after each training session, but also analyze the effect of each change and improvement we make over time. This wasn't evident in our study: while we did notice changes in our survey scores, we couldn't identify which of our actions caused them. Moving forward, instead of verbally communicating the cause and effect of our changes, we'll record that information and look for a positive correlation between each change and the feedback that follows.