Over the weekend I was in a store looking for an item that was not in stock. I asked one of the staff to look it up and order it so that I could pick it up later. The clerk had a noticeable grimace, so I asked, “Is there an issue?” They hesitated for a moment and replied, “No, but this new system isn’t all that great, so it might be faster or easier for you to go on our website and order it that way. If not, there is always Amazon.”
I was floored; they had basically suggested I go elsewhere rather than try to use their system. I asked whether the system was unreliable, to which they said, “No, the system works, but since we implemented it we have found that we can’t get certain functions to work properly.” The clerk went on to disclose that they had been told the system was working as designed and that they were not using it correctly, citing training issues. Now feeling a sense of camaraderie with the clerk, I asked how well they had been trained. They indicated that the training was poor, and that they were not the only clerk feeling this pain point. When they brought this up with the manager, they were told on several occasions that the system was working as it should and that they should read the QRC (quick reference card) on the support site to get a better handle on it. The clerk then pointed out that the QRC didn’t really speak to this particular issue, as it was assumed at some point that this functionality was similar to the previous ordering application.
I thanked the clerk for their transparency and wished them luck. I could tell they felt bad that they not only couldn’t help me, but had been put in a position where they had to turn away a customer.
Quite often, when a new system is implemented, there is a challenge around the user experience. Let’s face it, some people are not good with change. But shouldn’t issues like this have been ruled out when user acceptance testing was completed?
In some cases, from a change management perspective, we might find that the new application was signed off by users, but we should ask whether those users were part of the deployment or sustainment teams. If they were, they may have had a heightened familiarity with the product, such that standard questions were never asked until after the application was deployed.
I have found that, like reporting, training is quite often low on the totem pole of importance. The challenge then becomes how these training gaps will ultimately impact the user experience, or even that of the customer, as was the case in my experience.
There is a good chance that if the people using the application don’t understand a particular function, they will call it a defect. When lists of these are brought to a project team, the team may review them and determine that some were signed off by testers, and that the application simply works slightly differently than its predecessor. The challenge is to ensure there is a training component to compensate for these differences. While documentation is good, it should complement a well-trained team to achieve excellent service delivery.
In the absence of that, your teams may simply look for ways to work around the functionality, or, as in my case, send the customer to a competitor.
Follow me on Twitter @ryanrogilvie or connect with me on LinkedIn