QA Score, or Quality Assurance score, is your overall quality score, derived from your cases and based on the parameters defined within your scorecard. Whether this score comes from AutoQA or from manual audits done by your team, the method of calculation remains the same.
There are different kinds of QA Scores that are available within Elevate. They are defined as below:
Calculating QA Scores:
While the QA score is calculated the same way for AutoQA and Manual QA, AutoQA has some pre-built conditionals and flexibility that set it apart from Manual QA.
What remains the same is how a Behaviour’s Grades and Weights interact with each other to produce your score for that Behaviour. These Behaviour scores are then summed up for the case/agent to get a final score, which is in turn aggregated to provide a score for your organization. Please refer to the table below to better understand how that works.
Taking a few of the examples above: Final Behaviour Score = Behaviour Weight * Scoring Multiplier
(b) Using the formula above, we see that Probing has a max weight of 3 points and was given a ‘Good’ grade with a multiplier of 0.75. Hence, its final score is 2.25 (i.e. only 75% of the max weight). Note – Grade names, Scoring Multipliers, and Behaviour Weights are user defined and can be configured from the Scorecards page; they can be edited for both Manual and Auto QA.
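To make this concrete, here is a minimal sketch of that formula in Python (an illustration only, not SupportLogic’s implementation), using the Probing example:

```python
# Minimal sketch of: Final Behaviour Score = Behaviour Weight * Scoring Multiplier.
def behaviour_score(weight: float, multiplier: float) -> float:
    """Points a Behaviour contributes, given its max weight and the
    multiplier attached to the grade it received."""
    return weight * multiplier

# The Probing example: max weight of 3 points, graded 'Good' (multiplier 0.75).
print(behaviour_score(3, 0.75))  # 2.25, i.e. 75% of the max weight
```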
Now, once we have our Behaviour Scores, we simply sum them all to get the QA score for a particular case or for a particular agent. For example:
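Here is a minimal sketch of that summation, assuming a hypothetical scorecard (the Behaviours, weights, and grade multipliers below are made up; in practice they come from your Scorecards page):

```python
# Hypothetical grade-to-multiplier mapping from a scorecard.
MULTIPLIERS = {"Excellent": 1.0, "Good": 0.75, "Average": 0.5, "Poor": 0.0}

# (Behaviour, max weight, grade received) for one reviewed case -- made-up values.
graded_behaviours = [
    ("Probing", 3, "Good"),
    ("Empathy", 2, "Excellent"),
    ("Resolution", 5, "Average"),
]

# QA score for the case = sum of every Behaviour's weight * multiplier.
qa_score = sum(weight * MULTIPLIERS[grade] for _, weight, grade in graded_behaviours)
print(qa_score)  # 3*0.75 + 2*1.0 + 5*0.5 = 6.75
```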
As mentioned initially, the above process is the same for AutoQA and ManualQA; the only difference lies in who carries out the scoring. When done automatically by SupportLogic, it is recorded against your AutoQA score, and when done manually by your QA team members, it is recorded against your ManualQA score.
Now, a prerequisite to this is selecting the Scoring Level while conducting a review (Manual QA). The Scoring Level of a review dictates how granular and deep you, as an auditor, would like to go while reviewing a ticket. There are 3 types:
Scoring Strictness with AutoQA:
The default Scoring Level for AutoQA is Comment Level, which means there can be multiple observations for the same Behaviour in a case. This presents a new complexity – if there is 1 positive and 1 negative observation for the same Behaviour, how does that affect the score? Will the agent get marked down?
This is largely pre-defined and can be changed to reflect your choice at a later date as well; please reach out to us if you would like to do so. Currently, each Behaviour within the AutoQA scorecard has been set with the default strictness we’d recommend. Here are a couple of strictness possibilities to choose from:
As a result, we know whether a Behaviour makes a Positive or a Negative contribution towards a ticket’s QA Score, based on the various observations and grades that we see within a single case.
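For illustration, one such strictness rule – ‘1 Negative = Negative’, which is used in the example below – could be sketched like this (an illustration only, not SupportLogic’s implementation):

```python
# Sketch of the '1 Negative = Negative' strictness rule: a single Negative
# observation makes the whole Behaviour count as Negative for the case.
def overall_contribution(observations: list[str]) -> str:
    return "Negative" if "Negative" in observations else "Positive"

# One positive and one negative observation for the same Behaviour.
print(overall_contribution(["Positive", "Negative"]))  # Negative
```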
Let’s put that into an example and see how scoring would work for different agents in the same case. Assume that both Behaviours – Grammar and Timing – use the first strictness config as set above: 1 Negative = Negative. This means that even if there is just 1 Negative observation, the overall contribution for that Behaviour is Negative.
Agent 1’s scores are:
Grammar = Positive = 2 marks
Timing = Negative = 0 marks
Now, let’s look at Agent 2:
Agent 2’s scores are:
Grammar = Negative = 0 marks
Timing = Positive = 5 marks
As a result, assuming scores from all other Behaviours are 100%, the final QA scores are:
Agent 1’s QA score = 95 (100 less 5 for Timing)
Agent 2’s QA score = 98 (100 less 2 for Grammar)
Case QA score = 93 (100 less 2 for Grammar and 5 for Timing)
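For illustration, here is a minimal sketch that reproduces these numbers, assuming Grammar is worth 2 points, Timing is worth 5 points, and the remaining 93 points’ worth of Behaviours are fully earned by both agents and the case (an illustration only, not SupportLogic’s implementation):

```python
# Hypothetical Behaviour weights taken from the example above.
GRAMMAR_WEIGHT, TIMING_WEIGHT = 2, 5
OTHER_BEHAVIOURS = 100 - GRAMMAR_WEIGHT - TIMING_WEIGHT  # 93 points, assumed fully earned

def qa_score(grammar_positive: bool, timing_positive: bool) -> int:
    """QA score when Grammar/Timing contribute Positively or Negatively."""
    score = OTHER_BEHAVIOURS
    if grammar_positive:
        score += GRAMMAR_WEIGHT
    if timing_positive:
        score += TIMING_WEIGHT
    return score

agent_1 = qa_score(grammar_positive=True, timing_positive=False)   # 95
agent_2 = qa_score(grammar_positive=False, timing_positive=True)   # 98
# The case takes the Negative contribution from either agent, so it
# loses both the Grammar and the Timing points.
case = qa_score(grammar_positive=False, timing_positive=False)     # 93
print(agent_1, agent_2, case)
```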