
QA Score

QA Score (Quality Assurance score) is your overall quality score, derived from your cases and based on the parameters defined within your scorecard. Whether the score comes from AutoQA or from manual audits done by your team, the calculation method is the same.

There are different kinds of QA Scores available within Elevate. They are defined as follows:
  1. Overall QA – The average of your AutoQA and Manual QA scores. This score is visible on the Dashboard when no scorecard is selected.
  2. Auto QA – The total score derived from your cases when graded with the AutoQA scorecard.
  3. Manual QA – The total score derived from your cases when graded with any of the manual scorecards you have created. This score factors in only one manual scorecard at a time.
  4. Composite QA – A hybrid score derived from both your AutoQA scorecard and any one of your manual scorecards, aiming to give you the best of both worlds.
  5. Case QA – The total QA score for a single case (and all responders/agents within it).
  6. Agent QA – The QA score for a single agent within a ticket. When there are multiple agents on the same case, each agent and the ticket itself can have different scores.

Calculating QA Scores:

 

While the method of calculating the QA score is the same for Auto and Manual QA, AutoQA has some pre-built conditionals and flexibility that set it apart from Manual QA.

 

What remains the same is how the Grades and Weights of a Behaviour interact with each other to produce your score for that Behaviour. These Behaviour scores are summed for the case/agent to get a final score, which is in turn aggregated to produce the overall score for your organization. Please refer to the table below to better understand how that works.

| Behaviour Name    | Weight | Grade      | Scoring Multiplier | Result |
| ----------------- | ------ | ---------- | ------------------ | ------ |
| Opening Greetings | 2      | Okay       | 0.5                | 1      |
| Resolution        | 10     | Best       | 1                  | 10     |
| Probing           | 3      | Good       | 0.75               | 2.25   |
| Appreciation      | 5      | Fail       | 0                  | 0      |
| Timing            | 5      | Bad        | 0                  | 0      |
| Introduction      | 2      | Unobserved | 0.5                | 1      |

 

Taking a few examples from the table above:
(a) We see that Resolution has a max weight of 10 points. The auditor has marked this Behaviour with the grade ‘Best’, which has a multiplier of 1. Hence the score it gets = 10 points, which is the max score.

The final Behaviour score = Behaviour Weight * Scoring Multiplier

 

(b) Using the formula above, we see that Probing has a max weight of 3 points and was given a ‘Good’ grade with a multiplier of 0.75. Hence its final score is 2.25 (i.e. only 75% of the max weight).

Note – The Grade names, Scoring Multipliers, and Behaviour Weights are user-defined and can be configured from the Scorecards page. They can be edited for both Manual and Auto QA.
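To make the formula concrete, here is a minimal Python sketch (illustrative only, not SupportLogic’s actual implementation); the grade names and multipliers mirror the example table above:

```python
# A minimal sketch of: Behaviour score = Behaviour Weight * Scoring Multiplier.
# Grade names and multipliers below mirror the example table; in practice they
# are user-defined and configured on the Scorecards page.

MULTIPLIERS = {
    "Best": 1.0,
    "Good": 0.75,
    "Okay": 0.5,
    "Unobserved": 0.5,
    "Fail": 0.0,
    "Bad": 0.0,
}

def behaviour_score(weight: float, grade: str) -> float:
    """Final Behaviour score = Behaviour Weight * Scoring Multiplier."""
    return weight * MULTIPLIERS[grade]

print(behaviour_score(10, "Best"))  # Resolution -> 10.0 (max score)
print(behaviour_score(3, "Good"))   # Probing    -> 2.25 (75% of max weight)
```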

 

Now, once we have our Behaviour Scores, we simply sum them all to get the QA score for a particular case or a particular agent.

 

That is:

  • Agent QA Score = sum of all Behaviour Scores for the selected agent (in the case of multiple agents);
  • Ticket QA Score = Agent QA Score = sum of all Behaviour Scores (in the case of a single agent);
  • Ticket QA Score = sum of all Agent QA Scores (in the case of multiple agents).
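As an illustration, here is a minimal Python sketch of that aggregation (the observation format and the qa_scores function are hypothetical, not part of Elevate):

```python
from collections import defaultdict

# Hypothetical format: each scored observation is an (agent, behaviour_score)
# pair for a single ticket.
def qa_scores(observations):
    """Return per-agent QA scores plus the ticket QA score."""
    agent_scores = defaultdict(float)
    for agent, score in observations:
        agent_scores[agent] += score            # Agent QA = sum of Behaviour Scores
    ticket_score = sum(agent_scores.values())   # Ticket QA = sum of Agent QA scores
    return dict(agent_scores), ticket_score

agents, ticket = qa_scores([("Agent 1", 2), ("Agent 1", 10), ("Agent 2", 2.25)])
print(agents)   # {'Agent 1': 12.0, 'Agent 2': 2.25}
print(ticket)   # 14.25
```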

 

As mentioned initially, the above process is the same for AutoQA and Manual QA; the only difference lies in who carries out the scoring. When scoring is done automatically by SupportLogic, it is recorded against your AutoQA score, and when done manually by your QA team members, it is recorded against the Manual QA score.

 

Now, a prerequisite to this is selecting the Scoring Level while conducting a review (Manual QA). The Scoring Level of a review dictates how granular and deep you, as an auditor, would like to go while reviewing a ticket. There are three types:

 

  1. Comment Level – Each comment is scored individually, so there can be multiple observations of the same Behaviour within a case, as each comment may contain one. This is the default setting for AutoQA and cannot be changed. It is not recommended for Manual QA.
  2. Agent Level – Each agent is scored individually. This is recommended for Manual QA.
  3. Ticket Level – The entire case is scored as one, almost as though there were only one agent. This is also recommended for Manual QA.


 

Scoring Strictness with AutoQA

The default scoring is at Comment Level, which means there can be multiple observations of the same Behaviour in a case. This presents a new complexity: if there is one positive and one negative observation of the same Behaviour, how does that affect the score? Will the agent get marked down?

 

This is largely pre-defined; please reach out to us if you would like to change it to reflect your preference, now or at a later date. Currently, each Behaviour within the AutoQA scorecard is set to the default strictness that we recommend. Here are the strictness options to choose from:

 

  1. 1 Negative = Negative
  2. 1 Positive = Positive
  3. Negatives > Positives = Negative
  4. Ratio
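To illustrate how these options could resolve multiple observations of one Behaviour, here is a hedged Python sketch (the resolve_behaviour function and the 0.5 Ratio threshold are assumptions for illustration, not Elevate’s implementation):

```python
# Hedged sketch of the strictness options above. Option names follow the list;
# the function itself is hypothetical, and the Ratio threshold is assumed to
# be configurable (0.5 used here for illustration).

def resolve_behaviour(observations, strictness, ratio=0.5):
    """Collapse multiple Positive/Negative observations into one contribution."""
    neg = observations.count("Negative")
    pos = observations.count("Positive")
    if strictness == "1 Negative = Negative":
        return "Negative" if neg >= 1 else "Positive"
    if strictness == "1 Positive = Positive":
        return "Positive" if pos >= 1 else "Negative"
    if strictness == "Negatives > Positives = Negative":
        return "Negative" if neg > pos else "Positive"
    if strictness == "Ratio":
        return "Negative" if neg / (neg + pos) >= ratio else "Positive"
    raise ValueError(f"Unknown strictness: {strictness}")

# A single negative observation flips the whole Behaviour under option 1:
print(resolve_behaviour(["Positive", "Negative", "Positive"],
                        "1 Negative = Negative"))  # -> Negative
```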

As a result, we know whether a Behaviour makes a positive or negative contribution towards a ticket’s QA Score, based on the different observations and grades seen within a single case.

 

Let’s put that into an example and see how scoring would work for different agents in the same case. Assume that both Behaviours – Grammar and Timing – have the first strictness configuration listed above, 1 Negative = Negative. This means that even if there is one negative observation, the overall contribution of that Behaviour is negative.

| Agent 1 | Weight | Grade    | Scoring Multiplier | Result |
| ------- | ------ | -------- | ------------------ | ------ |
| Grammar | 2      | Positive | 1                  | 2      |
| Timing  | 5      | Negative | 0                  | 0      |
| Grammar | 2      | Positive | 1                  | 2      |
| Grammar | 2      | Positive | 1                  | 2      |
| Timing  | 5      | Positive | 1                  | 5      |
| Grammar | 2      | Positive | 1                  | 2      |

 

Agent 1’s scores are:

Grammar = Positive = 2 marks

Timing = Negative = 0 marks

Now, let’s look at Agent 2:

| Agent 2 | Weight | Grade    | Scoring Multiplier | Result |
| ------- | ------ | -------- | ------------------ | ------ |
| Grammar | 2      | Negative | 0                  | 0      |
| Timing  | 5      | Positive | 1                  | 5      |
| Grammar | 2      | Negative | 0                  | 0      |
| Grammar | 2      | Positive | 1                  | 2      |
| Timing  | 5      | Positive | 1                  | 5      |
| Grammar | 2      | Negative | 0                  | 0      |

 

Agent 2’s scores are:

Grammar = Negative = 0 marks

Timing = Positive = 5 marks

As a result, assuming scores from all other Behaviours are 100%, the final QA scores are:

Agent 1’s QA score = 95 (100 less 5 for Timing)

Agent 2’s QA score = 98 (100 less 2 for Grammar)

Case QA score = 93 (100 less 2 for Grammar and 5 for Timing)
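For completeness, here is a short Python sketch that reproduces this arithmetic (illustrative only; the points lost per agent come from the two tables above):

```python
# Reproducing the arithmetic above, assuming every other Behaviour scored 100%.
# Points missed per agent follow the two example tables in this section.

points_missed = {
    "Agent 1": {"Timing": 5},   # Timing resolved to Negative -> 5 points lost
    "Agent 2": {"Grammar": 2},  # Grammar resolved to Negative -> 2 points lost
}

for agent, losses in points_missed.items():
    print(agent, "QA score:", 100 - sum(losses.values()))
# Agent 1 QA score: 95
# Agent 2 QA score: 98

case_qa = 100 - sum(p for losses in points_missed.values() for p in losses.values())
print("Case QA score:", case_qa)  # -> 93
```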

