
Scorecard Breakdown

Before we look at the published Call Scoring Power BI app, let’s examine an example of the kind of scorecard a client may currently be using.

In this example, several subjective questions are highlighted; if asked of a hundred different people, they would yield a hundred different responses.

These questions would be difficult to answer with speech analytics as well, but we can do our best to break subjective questions down into smaller elements that can be measured objectively.

Speech analytics takes the objective approach instead: each of the questions here is tied to a phrase library, and the call either does or does not contain phrases from it.

Start Score indicates the direction in which scoring occurs. For example, an agent’s starting score is 0 until they identify themselves, at which point they earn 100 points for that question. Conversely, an agent has 100 points until they use excessive filler words, at which point they lose all points for that question.

Minimum Hits is the number of detected phrases required within a leaf-level category before the score is affected positively or negatively.

Red Flag indicates whether the category is a high-priority concern when detected. As seen here, when a customer gives negative feedback, the call is flagged for priority review.
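To make these three mechanics concrete, here is a minimal sketch of how a single scorecard question might be scored; the function and field names are illustrative assumptions, not the product’s actual implementation:

```python
def score_question(detected_hits, start_score, min_hits, red_flag):
    """Score one scorecard question from phrase-library hits (illustrative).

    detected_hits: number of phrases detected in the leaf-level category
    start_score:   0 (agent earns points on detection) or 100 (agent loses them)
    min_hits:      detections required before the score changes
    red_flag:      if True, a qualifying detection flags the call for review
    """
    triggered = detected_hits >= min_hits
    if start_score == 0:
        score = 100 if triggered else 0   # e.g., agent identified themselves
    else:
        score = 0 if triggered else 100   # e.g., excessive filler words
    flag_for_review = red_flag and triggered
    return score, flag_for_review

# Agent identification: starts at 0, earns 100 once a phrase is detected
print(score_question(detected_hits=1, start_score=0, min_hits=1, red_flag=False))
# Negative feedback: a red-flagged category triggers a priority review
print(score_question(detected_hits=2, start_score=100, min_hits=1, red_flag=True))
```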

The scorecard lists the questions that relate to the Word Bench leaf-level categories; there are a total of 34 questions across four main categories.

Weights for each question are listed in the far-right column. Note that the questions in the Call Scoring app are universal, not specific to any industry.
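As an illustration of how the weights combine per-question scores into an overall score, here is a small sketch; a weighted average is an assumption on our part, not necessarily the app’s exact aggregation:

```python
def overall_score(question_scores, weights):
    """Weighted average of per-question scores (assumed aggregation)."""
    total_weight = sum(weights)
    return sum(s * w for s, w in zip(question_scores, weights)) / total_weight

# Three hypothetical questions scored 100, 0, and 100 with weights 5, 3, and 2
print(overall_score([100, 0, 100], [5, 3, 2]))  # -> 70.0
```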

Now it’s time to open the published Call Scoring app and discover some insights!


Overview of Call Scoring

  1. The published Call Scoring application is broken down into inbound and outbound calls.

  2. These apps draw data from the two QA Scoring Word Bench applications.

  3. It is important to ensure that the Word Bench apps are kept up to date; changes to the phrases within the app will require reprocessing to update the JSON files.

  4. Filters run across the top of the app; they apply to each app separately (e.g., inbound filters apply only to the inbound app).

  5. The inbound and outbound apps each have the three sections seen here.

  6. The Contact Center Performance Overall tab shows an overview of scores across the entire range of calls allowed by the filters above; it is meant to give an idea of overall performance over time.

  7. Hovering over sections of the app can display additional information. For example, we can see a performance drop since the previous day.

  8. The Category tab breaks the chart down into four categories to allow for center-wide identification of opportunities.

  9. Categories can be displayed individually or together by holding the control key while selecting or de-selecting them.

The four categories here are the same as those seen in the Word Bench app. However, Word Bench shows more than four categories because the Deductions and Communications categories are separated out.

  10. The Agent Comparison tab allows you to drill further down to the agent level, giving you insight into how the score of an agent (or set of agents) compares to the average score on each question of the scorecard.

  11. By default, all questions on the scorecard will show here, but control-selecting will allow you to display the questions from each category.

  12. You can select one or more questions at a time (by holding control and clicking) to view the scores for each agent.

  13. Selecting questions on the left will change the average score calculated in this field.

  14. Sorting is available by clicking the top of a column.

  15. You can also select one or more agents at a time (by holding control and clicking). Selecting these five agents affects the bar graphs on the left; they now show the average of the selected agents in blue compared to the overall average of the call center.


Example Analysis (Training Opportunities)

  1. In this example, we see agents with below-average total scores.

  2. They also have below-average Communication scores.

  3. Most of the selected agents also have significantly lower-than-average Compliance scores.

  4. The bar chart on the left displays how the selected agents perform compared to the average of their peers.

  5. The largest opportunities in the Communication section for the selected agents are in these three scorecard questions; a focused training session might benefit these individuals.

Under the Compliance section, we can see that five of the six agents we identified before have training opportunities here as well.

  6. To ensure these results are accurate, it is best practice to review a sample of calls from these agents before proceeding with a training plan.

We will be pulling calls for agents:

  1. 187

  2. 126

  3. 005

  4. 184

  5. 063

  6. 069

To do so, we will be navigating to the “Agent Calls of Interest” tab.

  7. We can use the filters at the top of the screen to show only calls from these six agents. You can also sort the calls by Communication score from lowest to highest.

  8. Now you can click the icons here to open the Word Bench calls in a new tab for review.

After we have reviewed calls and determined a training opportunity does in fact exist, we should suggest a training session be created.

Training sessions can be more productive when agents of varying skill levels are mixed into each training group.

The training manager could benefit from knowing who the top and bottom performers are so they can be mixed into activity groups.

  9. To find the agents we would highlight as candidates to pair with the lower-performing agents, we would start by selecting the scorecard questions.

Doing this will change the calculated scores on the right.

We should also navigate to the Compliance section and control-select the three scorecard questions there.

  10. Once we have all the scorecard questions highlighted, we can easily see agents who perform well above average in Communication and/or Compliance; these agents can be paired with the six low-performing agents in focused training.

There are various ways to conduct focused training; each has pros and cons, and it greatly depends on the call center.

Examples:

  • Side-by-side training (with an agent)

  • Small training groups

  • Remediation group training

  • One-on-one training (with leadership)


Example Recommendation (Training Opportunities)

In side-by-side training, agents sit beside an experienced counterpart for a fixed amount of time per day until training is complete.

Pros:

  • Listening to how high-performing agents handle calls can fill in the gaps for agents with opportunities

  • Trainees can rotate through their side-by-side counterparts each day of training.

  • Empowers high-performing agents to develop their leadership skills for future management potential

  • Training can begin immediately since no training material needs to be created

  • Managers can commit their time to other tasks

Cons:

  • Training effectiveness varies greatly

  • Little to no control of training content

  • Productivity drops to less than half across each pair, since the trainee is not handling calls and the trainer is partially occupied with teaching

  • Agents may feel embarrassed that they are learning from their peers

In focused training groups, a Training Manager runs a group of three experienced agents and three lower-performing agents; the experienced agents bolster the training session by providing example situations and leading the team through exercises.

Pros:

  • Training managers can control what material is covered in the sessions

  • Experienced agents help teach or lead the class in the correct direction

  • Allows high-performing agents to develop their leadership skills for future management potential

  • It is not obvious to the agents who is considered lower-performing, which makes the sessions less embarrassing

Cons:

  • Productivity is reduced to zero for all agents attending a training session

  • A classroom-type setting cannot realistically emulate live conversation, even when role-playing is involved

In Remediation Group Training, a Training Manager runs the session; agents who attend these sessions are all lower-performing agents who need remediation training immediately.

Pros:

  • Training manager can control what material is covered in the sessions

  • Fewer resources are taken off the floor; only the lower-performing agents attend, so productivity is not affected as much

  • More experienced agents not attending the session can help pick up the slack and/or help run teams while managers are running the training

Cons:

  • The training session may not be as effective without more experienced agents mixed into the class to provide example situations and talking points

  • Agents may feel like the training session is punishment for low performance

In One-on-One training, a Training Manager meets with each agent individually to cover their opportunities.

Pros:

  • Each training session is specific to that agent’s opportunities

  • Negative effect on agent productivity is minimized; only one agent is taken off the floor at a time.

  • One-on-One training is typically not noticed at all within a call center

  • Agents have the most opportunity to speak their mind and get their questions answered

  • KPIs can be covered in detail without risk of embarrassing an agent

Cons:

  • Requires the largest manager time commitment

  • Depending on the layout of the call center, a private setting may not be possible


Example Analysis (Customer Feedback)

We have certainly discovered a training opportunity for a handful of agents, but we can provide more value in our analysis by investigating the feedback provided by customers.

To quickly identify calls of concern we will use the Agent Calls of Interest portion of the Call Scoring app. We immediately see three calls with negative feedback which we can review.

Once you have opened a call for review, go to the Application Scores tab and locate the QA Scoring app.

Click on “Customer Sentiment” to highlight the detected phrase in the transcript.

You may need to listen to portions of the call to get some context about what is going on.

Make sure you tag the call so you can see the feedback in the Word Bench search results.

Upon reviewing the calls, we can see there are issues with the physical phones as well as the service provided.

We can also review calls where the customer experience scores are lowest by sorting lowest to highest score.

After reviewing the calls, we can see there are additional issues with the physical phones and service; however, we cannot report an issue until we get an idea of how widespread it is.

We will not be manually reviewing every single call to find these negative customer experiences; instead, we will be conducting an ad-hoc search.

After searching a few phrases related to broken phones, we see a significant number of results (approximately 7% of all calls mention phones being broken or not working).
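Conceptually, the ad-hoc search computes something like the following over the call transcripts; the phrase list, transcript data, and structure here are hypothetical, since real searches run inside Word Bench rather than against raw text like this:

```python
# Hypothetical phrase list for the "broken phone" ad-hoc search
phrases = ["phone is broken", "not working", "stopped working"]

# Toy stand-in for the searchable transcripts
transcripts = {
    "call_001": "my phone is broken and i need a replacement",
    "call_002": "i would like to update my billing address",
    "call_003": "the handset stopped working yesterday",
}

# Keep every call whose transcript contains at least one search phrase
matches = {
    call_id: text
    for call_id, text in transcripts.items()
    if any(p in text for p in phrases)
}

share = len(matches) / len(transcripts)
print(f"{share:.0%} of calls mention a broken or non-working phone")
```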

We would also want to provide examples of these calls to the client, so tagging calls as you review them is best practice and will save you time when you need to locate them again.


Standard Example Analysis

Most call centers require agents to read a recording disclosure aloud to the customer to ensure they are complying with state laws; although not all states require this, call centers usually take a preventative stance and apply a blanket requirement to all outbound calls.

In one case, we identified that one of the outbound call centers failed to provide recording disclosures on 15% of calls. This meant thousands of calls were being made, each of which could carry a potential fine of up to $2,500.
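To put that exposure in perspective, here is a back-of-the-envelope calculation; the monthly call volume is a hypothetical figure, while the 15% miss rate and $2,500 maximum fine come from the example above:

```python
# Hypothetical monthly outbound volume; miss rate and maximum fine
# are taken from the example above.
monthly_outbound_calls = 20_000
miss_rate = 0.15
max_fine_per_call = 2_500

missed = monthly_outbound_calls * miss_rate
exposure = missed * max_fine_per_call
print(f"{missed:.0f} undisclosed calls -> up to ${exposure:,.0f} in potential fines")
```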


Non-Standard Example Analysis

In another case involving a financial institution, we identified repeated confusion and frustration from customers mentioning a potential scam involving unknown charges showing up as “Vaccine Police.”

After conducting an ad-hoc search and creating a customized Word Bench application, we were able to identify a very slow, barely noticeable decrease in mentions of “Vaccine Police.” When fraud/scam cases arise at financial institutions, they should be handled quickly, or they can get out of control.

Our feedback for that client was to strengthen the procedures for agents to report fraud/scam cases up the chain of command more quickly, and to utilize speech analytics to assist with investigation of new cases and progress of known ones.


Other Value Points

The value of speech analytics applies across many sections of an organization, not just the call center leadership.

If analysis reveals information that could do any of the following, it may be valuable to the client:

  • Save money

  • Reduce legal risk

  • Reduce employee turnover

  • Improve efficiency

  • Make money

  • Increase sales

  • Improve advertising

  • Improve product design

  • Make customers happier

  • Reduce customer attrition
