- Establish a strategic vision: Gain executive sponsorship and leverage their strategic vision to ensure that your quality metrics provide insights into the broader key initiatives of your business. Visible leadership support also goes a long way in ensuring that the whole organization recognizes the value of your quality management program.
- Assess your current state of quality: Develop a deep understanding of how your current quality program works, including how agents, supervisors, and managers feel about it. Whether you’re starting from scratch or making changes to an existing quality program, there is typically some change management involved to ensure all parties involved (especially agents) understand that the purpose of the quality program is to help them excel in their roles, not to act as ‘Big Brother’.
- Establish new quality guidelines: Assemble a Quality Council that represents peer ideas, provides positive feedback, and clearly communicates the quality message for new changes. This should include evaluators as well as other stakeholders, like agents, managers, and leaders. Engaging a pool of agents in this Quality Council will help create quality champions within your contact center and assist in change management and overall acceptance of the program.
- Support the strategic vision?
- Provide actionable and objective metrics and coaching points (SMART goals)?
- Drive positive business and customer satisfaction change?
- If “Yes,” the form will proceed to question 2
- If “No,” the form will go to question 5 (a minimal sketch of this branching logic appears just below)
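To make this branching behavior concrete, here is a minimal sketch of how conditional question flow could be represented and evaluated. The dictionary structure, function name, and question numbers are illustrative assumptions for this guide, not the form engine of any particular quality management product.

```python
# Hypothetical sketch of conditional form branching: each question can name
# the next question to show, depending on the answer selected.
FORM_FLOW = {
    1: {"Yes": 2, "No": 5},  # question 1 branches based on its answer
    # Questions without an entry simply advance to the next question number.
}

def next_question(current: int, answer: str) -> int:
    """Return the number of the next question to display."""
    branch = FORM_FLOW.get(current, {})
    return branch.get(answer, current + 1)

print(next_question(1, "Yes"))  # -> 2
print(next_question(1, "No"))   # -> 5
print(next_question(2, "Yes"))  # -> 3 (no branch defined, so it advances)
```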
- Value-based scoring: You can assign a point value to each question and sum the points to reach a total score. Conversely, you can start with a total and have points subtracted.
- Percentage-based scoring: Questions can also be calculated as a percentage and weighted by assigning a value to each question. Additionally, question sums can be combined with category percentages. Assign weights to all the categories in your call scoring evaluation according to their relevance and importance.
- Rank scoring: If you do not want agents to see the numerical score for their evaluations, you can use ranking. For example, instead of a score, you could display text (such as Excellent or Poor) in the evaluation. Define a score range for each rank you want to display. For example, you might define the ranges for a form as 90%-100% for Excellent, 70%-89% for Very Good, and 10%-29% for Poor. On this scale, an agent evaluation scoring 75% would be marked as Very Good. A worked sketch of weighted scoring and rank mapping appears just below.
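The sketch below shows, under illustrative assumptions, how percentage-based scoring with category weights and rank labels could be calculated. The category names, weights, and sample percentages are made up for this walkthrough; the rank ranges mirror the example above.

```python
# Illustrative sketch of weighted percentage scoring and rank mapping.
# Category names and weights are hypothetical examples.
CATEGORY_WEIGHTS = {"Greeting": 0.2, "Problem solving": 0.5, "Closing": 0.3}

# Rank ranges from the example above, as (low, high, label) in percent.
RANKS = [(90, 100, "Excellent"), (70, 89, "Very Good"), (10, 29, "Poor")]

def weighted_score(category_percentages):
    """Combine per-category percentages into a single weighted score."""
    return sum(CATEGORY_WEIGHTS[c] * p for c, p in category_percentages.items())

def rank_label(score):
    """Map a numeric score to its rank label, if one is defined."""
    for low, high, label in RANKS:
        if low <= score <= high:
            return label
    return "Unranked"

score = weighted_score({"Greeting": 100, "Problem solving": 70, "Closing": 60})
print(round(score, 1), rank_label(score))  # 73.0 Very Good
```

On the same scale, the 75% evaluation mentioned above falls into the 70%-89% range and would display as Very Good.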
- Plan Frequency: Is this a one-time plan, or is this a plan that should repeat on a weekly, monthly, or quarterly basis? You may want to execute a one-time plan if you are just looking for quality insights on something specific that only takes place for a short period of time. For example, if you just launched a new product, you might want to evaluate each agent one time on their ability to support calls about that new product. However, most quality plans repeat, as they tend to be broader in scope and you want to evaluate agents on an ongoing basis.
- Evaluation Form: What form should evaluators be using for the interactions in this plan? Good thing you created awesome forms already; now associate the right form with each quality plan!
- Sampling: How many interactions per agent should be evaluated for each cycle of the quality plan? This number can vary greatly based on the resources available in your quality management team, as well as how granular your quality plan is.
- Interaction Filtering:
- Interaction channel type – Do you want to evaluate only voice interactions? Or only interactions with screen recording? Or digital interactions? Or all the above!
- Interaction direction – Does it matter if the interactions evaluated are inbound or outbound? If so, specify!
- Interaction content type – Do you want to evaluate interactions pertaining to a specific topic, product, or complaint? Then you may want to consider filtering your quality plans based on:
- ACD skills
- Interaction ACD disposition – For example, only calls where the disposition was “resulted in sale” or another business-specific outcome.
- Analytics categories – Advanced quality management software solutions, like CXone Quality Management Analytics Pro, are infused with analytics capabilities that enable more precise quality plans based on unstructured, nuanced interaction data. Examples include the customer sentiment of the interaction, specific words and phrases mentioned in the interaction, call purpose or intent, and much more. This allows contact centers to pinpoint specific interactions to evaluate without having to search for them.
- Agent pool: Does this particular quality plan pertain to all agents in the contact center? Or is it specific to only a certain team? Most larger contact centers with specialized teams have different quality forms and plans relevant for each. For example, the quality plan used to evaluate interactions related to the Tech Support team may not be relevant for the Billing team.
- Evaluators: Again, depending on your quality management resources, you may have different evaluators assigned to different quality plans and teams. A sketch of how these attributes might come together in a single plan definition appears just below.
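Pulling these attributes together, the sketch below shows one way a quality plan definition might be modeled. The field names, teams, and values are illustrative assumptions for this guide, not the configuration schema of CXone or any other product.

```python
# Hypothetical model of a quality plan, capturing the attributes discussed above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class QualityPlan:
    name: str
    frequency: str                 # "one-time", "weekly", "monthly", or "quarterly"
    evaluation_form: str           # the form evaluators should use
    samples_per_agent: int         # interactions evaluated per agent, per cycle
    channel_types: List[str] = field(default_factory=lambda: ["voice"])
    direction: str = "any"         # "inbound", "outbound", or "any"
    acd_skills: List[str] = field(default_factory=list)
    dispositions: List[str] = field(default_factory=list)
    analytics_categories: List[str] = field(default_factory=list)  # e.g. sentiment
    agent_pool: List[str] = field(default_factory=list)            # teams or agents
    evaluators: List[str] = field(default_factory=list)

# Example: a repeating monthly plan scoped to a Tech Support team.
tech_support_plan = QualityPlan(
    name="Tech Support - Monthly",
    frequency="monthly",
    evaluation_form="Tech Support Evaluation",
    samples_per_agent=4,
    channel_types=["voice", "chat"],
    direction="inbound",
    acd_skills=["Tech Support"],
    analytics_categories=["negative sentiment"],
    agent_pool=["Tech Support Team"],
    evaluators=["QM Team"],
)
```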
Evaluations
When you create a solid quality plan using quality management software, you remove the time-consuming, tedious task of manually searching for interactions and ensuring evaluators are assessing the right number and type of interactions per agent. Instead, evaluators have time to focus on the true task at hand: reviewing interactions and providing constructive agent feedback. The actual evaluation execution is quite straightforward, since the quality form and quality plan have already done a lot of the heavy lifting. Still, there are a few key things to consider:
- Evaluator pool: Who are your evaluators? In large contact centers, there is usually a dedicated quality management team, but we can’t all be so lucky. In small and mid-sized contact centers, evaluators are often supervisors and managers pulling double duty. In those scenarios, it is even more important that you employ effective quality management software to ensure form creation and quality plan execution are easy and automated, because these multi-hat evaluators have no time to waste chasing down form versions or wading through piles of interactions!
While it makes sense to have an agent’s direct supervisor provide quality feedback, we encourage you to also mix it up and have other supervisors and leaders evaluate them as well. This helps promote a sense of collaboration across teams, as well as a feeling of fairness. It shouldn’t always be the same person evaluating an agent over and over, because you begin to run the risk of that agent feeling the evaluator “just doesn’t like” them or “is out to get them.”
Also, to help remove some of the burden from supervisors and managers, consider giving more tenured agents a share of quality management responsibilities. This can be a great development and leadership opportunity for your agents, as well as give them a taste of a potential future career path.
Even in larger contact centers, we often see the concept of a Quality Management “Task Force,” where agents rotate through some quality management responsibilities for a brief period. We have seen this not only lift workload off quality management resources, but also improve agents’ perceptions of the QM program and boost overall employee engagement. Naturally, there are additional factors to consider if you implement one of these programs in your contact center (for example, should peer evaluations count towards scores? Are there any HR considerations?), but the concept is becoming more and more popular.
- Self-evaluations: As the saying goes, “We’re our own harshest critics,” which is a great reason to include self-evaluations in your quality management program if you don’t already. Self-assessments encourage employees to review and improve their own performance. A self-evaluation is performed by the agent on their own interaction, using the same evaluation form that another evaluator would use. Self-evaluations can create real “aha” moments for agents. When they actually listen to themselves interact with the customer, they can better understand where both their evaluators and their customers are coming from with their feedback. We recommend having agents complete at minimum one self-evaluation per month. Quality management software can make it easy to route self-evaluations to agents, delivering them with the appropriate form right in the same agent interface they use to handle customer interactions!
CXone Quality Management Analytics Pro takes this process a step further with Collaborative Evaluations. A collaborative evaluation is an evaluation that is sent to an evaluator and to the agent at the same time. The agent performs a self-evaluation on their own interaction, while the evaluator evaluates the same interaction using the same form. The results are then displayed side by side so that an effective comparative review conversation can take place.
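As a simple illustration of that side-by-side view (with made-up questions and scores, not product output), the sketch below lines up an agent's self-scores against the evaluator's scores and shows the gaps worth discussing in the review conversation.

```python
# Illustrative side-by-side comparison for a collaborative evaluation.
agent_scores = {"Greeting": 5, "Active listening": 3, "Resolution": 4}
evaluator_scores = {"Greeting": 5, "Active listening": 4, "Resolution": 2}

print(f"{'Question':<18}{'Agent':>6}{'Evaluator':>11}{'Gap':>6}")
for question, agent in agent_scores.items():
    evaluator = evaluator_scores[question]
    print(f"{question:<18}{agent:>6}{evaluator:>11}{evaluator - agent:>6}")
```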
- Agent Acknowledgement: Once the evaluation is completed, it’s critical that the evaluation and accompanying feedback are reviewed by the agent. Back in the day, this might have meant the evaluator walking by the agent’s desk and dropping off a completed evaluation (much like a grade school teacher passing out the results of a quiz), but with quality management software, this can happen in a more automatic fashion. For example, the moment the evaluation is complete, it can route to the agent in their interaction-handling interface for review. The agent can then be required to acknowledge the review by clicking a button.