I get asked on occasion what the “right” support transaction survey looks like. I’ve been a part of many long discussions about this very topic within the companies I’ve served during my career. If you are not familiar with transaction surveys, they are a short, focused set of survey questions that you seek answers for from each customer at the conclusion of their support interaction. They can be used for many purposes including measuring the quality of the support delivered and the effect of various internal improvement initiatives on end customer impact.
My answer to the question of what’s best or right is that there is no exact formula for transaction surveys. In fact, if you try to follow some standard or use someone else’s model, then you are definitely doing it wrong. If you are looking for a set of “best” questions that you can throw out there, your efforts are misplaced. That would indicate that your intent is merely to implement a transaction survey just to say you have one, to see what your customers are saying, or because an executive wanted it. In all of those cases, there will be no real, sustainable value from it. To be valuable, the survey needs to be completely integrated into how you run your support model and needs to be designed by you to match your unique situation.
So if there aren’t any standard questions, then what are the hallmarks of a good transaction survey? Based on my experience, I would say:
- It is focused and concise – You need to keep the number of questions to a minimum to ensure a reasonable response rate. A small number also increases the likelihood that the customer will take time to actually consider each question (versus running down the list and giving the same score for every item). I personally think 10 questions is the upper limit. Many times other functional areas within your company will want to tap into the survey. They will want to use this window into the customer to gather data about other topics. Don’t agree to it. It destroys the integrity of the support survey. While you might gain a bit of data in the short term, in the long term it closes or dirties the window. If marketing really wants to know “X” about your customers, then they should use a proper marketing survey approach for that, not piggyback on the support survey.
- The questions at least cover the basic quality of the support interaction – You do need to cover how well you met the needs of your customer. In almost any industry that would include: overall satisfaction with the support provided and the final resolution of the problem, some measure of speed delivered versus their needs and expectations, and the ease of the support interaction. I am on the fence about asking about product satisfaction. My experience has been that the results are distorted by the nature of the support interaction, and there is quite a bit in the literature showing that the influence can be both positive and negative. On the other hand, product satisfaction is clearly related to the support incident, and there may not be any other avenue for getting real-time data back to the product R&D teams.
- It is written from the customer perspective – Asking questions that are primarily focused on subjects of interest to the internal needs of the support team will not be very helpful. The response rates will be lower, because the customers will care less, and the responses themselves will be less true to the customer’s experience. More than that, though, this type of question always elicits a visceral response from me. I feel that we are asking customers to do our work in evaluating our employees – this after they just had to work to solve their issue. Any question that asks the customer to rate or evaluate an individual falls in this bucket for me. It is not their job to directly evaluate my team. It is my job to evaluate my team based on the feedback I get from customers about the results they experienced. We can get that feedback from them by asking different questions. As an example:
- Bad survey question: “Please rate the agent on the level of training they exhibited.” Here we would be asking the customer to judge how well internal support training was provided. How would they know? Are they experts on support training curriculum and effectiveness?
- Good survey question: “Rate us on how well we were able to quickly determine the cause of your problem and provide a comprehensive solution.” I believe that this question points directly to the internal training question – are our agents at the right level of expertise? If so, then they will debug quickly and provide a solid answer. But it does so in a way that approaches the subject from the way the customer experiences it.
- It is a living document – If you are using the exact same survey that you used 3 years ago, then it is not an effective survey. I completely understand the desire to have a long track record of scores with which to compare performance. But if that becomes an unwillingness to change anything at all in the survey, it quickly becomes irrelevant to what you are doing today. For example, when we made some changes in our approach to self-help and access to support, we replaced a less important question with a new one focused in this area. Note I said replace, not add – back to point one.
- You survey all transactions – If you leave certain classes of incoming requests out of the survey process, then you may suddenly see an increase in the number of those types of requests. They can easily become a dumping ground when agents have an interaction that went badly: the agent marks the request as an excluded type to avoid receiving a bad score. I actually think this behavior is manageable in other ways as well, but that is probably the topic for another post. I’ve also seen another systems effect around this. We used to exclude a certain set of requests when the internal ownership was outside of our immediate support team. The argument was that we didn’t own the requests and couldn’t affect the outcome, so we shouldn’t be measured by them. But that just led to the team “giving up” on a whole class of interactions. And it fostered an attitude of “I only own the pieces I can control” rather than “I own the customer’s success with the company.” When we restored ownership of all requests, we actually found we could influence those other items through our actions and our communications with the customer. In addition, getting the data back directly from the customer was a lever to help drive internal company change in these related areas.
- You ask for praise for your employees – We have always used the survey as an input to our awards program, and we do that in a very tailored way. We ask customers if they would like to nominate their agent for an award and then give them an input field to explain why. We get a very high rate of feedback this way. There is something about knowing that their feedback triggers an actual nomination that seems to spur a high level of response.