Dashboard (Home Page)
Filters allow you to analyze different departments or queues individually.
The Overview section shows the call volume and call statistics for the most recent day with calls by default.
You can hover your mouse over any day on the chart, and a pop-up will appear with the volume of calls for that day.
Selecting that day will display call statistics on the right.
Applications: Out-of-the-box and customized scoring programs will appear here; these can be created by anyone with access, for a variety of purposes, as seen here.
✅ Your Turn:
Select “Agent Scorecard” to navigate from the Dashboard to the Agent Scorecard application results page.
Application Results
Agent Scorecard is now in bold, indicating that you are currently looking at results from that application.
Results for the latest day in the first category of the application display by default, and hovering over any day will show a pop-up for that day.
3. The Communication section is showing by default. According to this application, 88.3% of calls from 2/28 contained phrases from the Opening Statements sub-category.
4. Any category with an asterisk (*) is considered a “leaf-level” category. Leaf-level categories are the lowest-level categories and have no sub-categories, while categories without an asterisk do have sub-categories.
✅ Your Turn:
“Rapport-Building” does not have an asterisk. Click on the bar to display the sub-categories of the “Rapport-Building” category.
5. The sub-categories here break down how rapport was built across the calls.
✅ Your Turn:
Return to the “Communication” section by clicking here:
6. Suppose you want to listen to a call where the agent displayed empathy. Hovering over the Empathy sub-category will show a pop-up; clicking the section will load those 23 calls into the bottom section.
✅ Your Turn:
Click the “Empathy” section bar and scroll down to view the lower section of this screen.
7. This lower section will default to the “Agents” view where you can see overall scores by agent. Since we are looking for the calls, we will use this drop-down to get to the “Files” view.
✅ Your Turn:
Click “Files” from the drop-down.
8. The lower section will change to this view, which contains the categories we saw in the previous section and the 23 Empathy calls on the right.
9. “Empathy” is already selected because we selected the Empathy category. (We will cover more about the options in this section in Module 5: ad-hoc Search.)
✅ Your Turn:
Select one of the files from the results section. (It may not be the same file as shown here.)
Individual Call
The call will open in a new tab and the Application Scores section will automatically highlight the Empathy section in yellow. In this case, the application identified 5 phrases within the Empathy phrase library.
✅ Your Turn:
You can quickly locate the first of these with the blue arrows on the right. Select the down arrow.
2. The first of these 5 yellow-highlighted phrases is marked in the transcript in orange; you can use the arrows to continue locating the remaining phrases.
3. You can select multiple categories on the left, and their phrases will be highlighted similarly.
✅ Your Turn:
Let’s find out more about this call by selecting the “File Details” tab.
4. You can play the call using the controls here, or you can play a specific part of the call by clicking on any of the speaker turns.
5. You can download the audio as an MP3 file, a JSON file that contains all of the call’s data (including application results), or just the transcript.
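For readers who want to work with the downloaded JSON programmatically, here is a minimal sketch of loading and inspecting such a file. Note that the field names used below (`transcript`, `speaker`, `text`, `application_scores`) are hypothetical placeholders; inspect your own export to find the actual structure.

```python
import json

# A stand-in for a downloaded call-data export.
# NOTE: these keys are illustrative assumptions, not the product's real schema.
sample = """
{
  "transcript": [
    {"speaker": "Agent", "text": "How can I help you today?"},
    {"speaker": "Client", "text": "I have a question about my bill."}
  ],
  "application_scores": {"Empathy": 5}
}
"""

call = json.loads(sample)

# Print each speaker turn from the transcript.
for turn in call["transcript"]:
    print(f'{turn["speaker"]}: {turn["text"]}')

# Application results ride along in the same file.
print("Empathy phrase hits:", call["application_scores"]["Empathy"])
```

In practice you would replace `sample` with `open("call.json").read()` pointed at the file you downloaded.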
6. The “Skip Silence” option will speed up manual review of a call by skipping over portions of the call where dead air was detected.
7. You can add a tag to the call to bookmark it for later use or to categorize it. The tag does transfer into the database when you export data (we will cover this in Module 6).
8. The tag also appears in the search results of the Dashboard screen.
9. There is some basic information in this section, but it’s worth mentioning that the “Request ID” matches the one in Genesys, should you ever need to reference the original audio recording.
✅ Your Turn:
The “Show Emotion” option will show you where the AI has identified positive and negative emotion. Turn on the “Show Emotion” option.
10. The three scores shown here reflect how the AI scored the call for each speaker and as a whole.
A GREEN BACKGROUND indicates the speaker turn was spoken in a generally positive tone, while a RED BACKGROUND indicates a negative tone.
Green bold text indicates words rated as generally positive, while red bold text indicates generally negative words.
11. Silence is a calculation of how much dead-air time there is compared to speaking time; it is indicated in the transcript with [SILENCE #], where # is the number of seconds of silence that occurred.
12. Overtalk is a calculation of how much the agent and client speak over each other compared to not doing so; it is indicated in the transcript with a blue line where the overtalk occurred.
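The Silence and Overtalk scores in steps 11 and 12 can be sketched as simple ratios. The functions below are an illustrative reading of those descriptions, not the product’s actual formulas, and the numbers are made up.

```python
def silence_ratio(dead_air_s: float, speaking_s: float) -> float:
    """Dead-air time relative to speaking time, as a percentage.
    One plausible reading of step 11's description; not the product's formula."""
    return round(100 * dead_air_s / speaking_s, 1)

def overtalk_ratio(overlap_s: float, speaking_s: float) -> float:
    """Time the agent and client speak over each other, relative to
    speaking time, as a percentage. Illustrative only."""
    return round(100 * overlap_s / speaking_s, 1)

# Illustrative call: 240 s of speech, 60 s of dead air, 12 s of overlap.
print(silence_ratio(60, 240))   # 25.0
print(overtalk_ratio(12, 240))  # 5.0
```

The point of the sketch is simply that both metrics compare one duration against another, which is why longer calls are not automatically penalized.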
13. Clarity measures how clearly the speakers can be understood and is greatly affected by the bit rate of the audio recording.