Google Dialogflow CX

Google Dialogflow CX is a third-party platform that provides voice virtual agents. Virtual agents interpret what your contacts say and respond appropriately. They do this using technologies such as speech-to-text transcription, natural language understanding, and text-to-speech synthesis.

Virtual agents are flexible and can provide a range of functions to suit the needs of your organization. For example, you can design your virtual agent to handle a few simple tasks or to serve as a complex interactive agent.

CXone supports using Google Dialogflow CX with voice channels only. CXone supports utterance-based features with Google Dialogflow CX. Features that require audio streaming are not supported.

Dialogflow ES and CX are public offerings that you can purchase directly through Google. However, the public version does not have full telephony features or native connections between Dialogflow and Google Contact Center AI Agent Assist. These features are available when you purchase through NICE CXone partners.

Comparison of Google Dialogflow CX and ES

CXone supports Google Dialogflow ES and CX. The two versions are similar, but have some key differences.

Dialogflow ES is suitable for small, simple bots. It simulates nonlinear conversation paths using a flat structure of intents and contexts as a guide. This approach doesn't scale to large or complex bots. You can pass contexts using the customPayload property of the Virtual Agent Hub Studio action in your scripts. These bots use context data to determine the contact's intent.

Dialogflow CX supports complex, nonlinear conversational flows suitable for large, complex bots. It allows intents to be reused and doesn't require contexts. You can pass customPayload data, but you don't need to include contexts.
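As a sketch of the difference, an ES script might pass context data through the customPayload property in a Snippet action, while a CX script can send the same payload without any contexts. The property names below other than customPayload itself are hypothetical examples, not required names:

DYNAMIC customPayload
// Dialogflow ES: context data guides intent matching
// ("billing-followup" is a hypothetical context name)
customPayload.context="billing-followup"
// Dialogflow CX: contexts aren't needed; pass only the data
// your agent expects (hypothetical parameter)
customPayload.accountType="residential"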

Conversation Flow for Voice Virtual Agents

To start an interaction with a voice virtual agent, contacts call a phone number and reach your organization. The contact may be connected directly to the virtual agent, or they might need to choose an option in an IVR menu. Once the conversation with the virtual agent begins, the contact's utterances are transcribed into text so the virtual agent can analyze them. The virtual agent's responses are converted to synthesized speech using a text-to-speech service before being sent to the contact. Transcription and speech synthesis can happen in CXone or, in some cases, in the provider's platform.

After the conversation has started, the virtual agent analyzes the contact's utterances to understand the purpose or meaning behind what a person says. This is known as the contact's intent. When the intent is identified, the virtual agent sends an appropriate response to the contact. The method of sending and receiving requests differs depending on the method of connection the virtual agent uses: 

  • Utterance-based: Requests and responses are sent via Virtual Agent Hub and the script with each turn. This option allows for customization of the virtual agent's behavior from turn to turn.
  • SIP backchannel: Requests and responses are sent back and forth between the virtual agent and the contact. CXone stays connected to the virtual agent service throughout the conversation, but does not participate in it. CXone waits for the signal that the conversation is complete or that the contact needs to be transferred to a live agent.

At the end of the conversation, the virtual agent sends a signal to the Studio script. It can signal that the conversation is complete, or that the contact needs to speak with a live agent. If the conversation is complete, the interaction ends. If a live agent is needed, the script makes the request. The contact is transferred to an agent when one is available.

Once the conversation is complete, post-interaction tasks can be performed, such as recording information in a CRM.

Prerequisites

To use Google Dialogflow CX virtual agents with CXone, you need:

  • A Google Cloud Platform account.

  • A Google Dialogflow CX virtual agent configured and trained to provide responses to your contacts' requests. To complete integration in CXone, you need the virtual agent name from the virtual agent's settings in the Google Dialogflow CX console.

Alpha Visibility in Google

Alpha visibility is a Google program that provides Google Cloud Projects access to features that aren't otherwise available. Alpha visibility is not required to use Dialogflow CX with CXone. However, there is one case when you may need to have alpha visibility enabled.

Alpha visibility is needed for the Dialogflow virtual agent to return the last user utterance along with the intent information. You can view this information in a script trace. If the lastUserUtterance variable is empty when it should contain data, alpha visibility may not be enabled for your Google Cloud project.
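For example, a Snippet action placed after the virtual agent turn could check for this condition. The variable name botOutput is a hypothetical placeholder for wherever your script stores the data returned from the virtual agent:

// If the returned utterance is empty when it should contain data,
// alpha visibility may not be enabled for the Google Cloud project
IF botOutput.lastUserUtterance = ""
{
   ASSIGN alphaVisibilityWarning = "true"
}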

Components of an Integration

The integration of Google Dialogflow CX involves the following components: 

Conversation Transcripts

You can capture the transcript and intent information from Google Dialogflow CX voice conversations. This option is not available if you use a SIP backchannel connection with Dialogflow CX. You can use the captured data however you want. For example, when an interaction is transferred to a live agent, you could display the transcript for that agent, or you could save it as a permanent record of the conversation. You can choose to capture just the transcript, just the intent information, both, or neither.

If you want to capture this information, you must enable it in the Google Dialogflow CX configuration settings in Virtual Agent Hub. You must also configure the Studio script used with your virtual agent. The script must include an action configured to manage the captured data. Captured data is stored temporarily for the life of the contact ID. If you need to save it, you can configure the script to send it to an archive. You are responsible for scrubbing all saved data for personally identifiable information (PII).

Contact Center AI Insights

If you use Google Dialogflow Contact Center AI Insights, you need an additional configuration in your Studio script. The Contact Center AI Insights feature only works on conversations that have been marked complete.

By default, it takes 24 hours for Dialogflow CX virtual agent conversations to be marked complete. However, you can force them to close by sending an automated intent to Dialogflow at the end of each interaction.

To do this, you need to send the value conversation_complete through the AutomatedIntent property of the Voicebot Exchange action after the interaction has ended. You can hard-code this value in the property, or you can send it via a variable.
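For example, a Snippet action before the Voicebot Exchange action could set a variable that you then reference in the AutomatedIntent property. The variable name automatedIntentVar is a hypothetical example:

// Value sent through the Voicebot Exchange action's AutomatedIntent
// property to mark the conversation complete in Dialogflow
ASSIGN automatedIntentVar = "conversation_complete"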

Speech Context Hints

Speech context hints are words and phrases sent to the transcription service. They're helpful when there are words or phrases that need to be transcribed a certain way. Speech context hints can help improve the accuracy of speech recognition. For example, you can use them to improve the transcription of information such as address numbers or currency phrases.

If you want to use speech context hints, you must add them to your script. Dialogflow speech context hints are sent in the custom payload. You must include two parameters: 

  • speechContexts.phrases: The Google class token for the hint you want to give. The token must match the language and locale of your contacts. If you want to add multiple tokens, add a speechContexts.phrases parameter for each token.
  • speechContexts.boost: A weighted numeric value between 1 and 20 assigned to the specified phrase. The transcription service uses this value when selecting a possible transcription for words in the audio data. The higher the value, the greater the likelihood that the transcription service chooses that word or phrase from the alternatives.

For example:

DYNAMIC customPayload
customPayload.speechContexts.phrases="$OOV_CLASS_ALPHANUMERIC_SEQUENCE"
customPayload.speechContexts.boost=10

You can see the contents of the customPayload variable in Studio traces and application logs.

Custom Scripting Guidelines

Before integrating a virtual agent, you need to know: 

  • Which script you want to add a virtual agent to.
  • The virtual agent Studio action you need to use.
  • Where the Studio actions must be placed in your script flow.
  • The configuration requirements specific to the virtual agent you're using.
  • How to complete the script after adding the virtual agent action. You may need to: 
    • Add initialization snippets to the script using Snippet actions. This is required if you want to customize your virtual agent's behavior.
    • Reconfigure the action connectors to ensure proper contact flow and correct any potential errors.
    • Use the OnReturnControlToScript branch to handle hanging up or ending the interaction. If you use the Default branch instead, your script may not work as intended.
    • Complete any additional scripting and test the script.

Ensure that all parameters in the virtual agent actions you add to your script are configured to pass the correct data. The online help pages for the actions cover how to configure each parameter.

Additionally, ensure that you completely configure your virtual agent on the provider side. Verify that it's configured with all possible default messages. This includes error messages or messages indicating an intent has been fulfilled.

You may be able to obtain template scripts for use with virtual agent integrations from NICE CXone Expert Services. If you need assistance with scripting in Studio, contact your CXone Account Representative, see the Technical Reference Guide section in the online help, or visit the NICE CXone Community site.

Studio Action for Voice Virtual Agents

The Voicebot Exchange action is for complex virtual agents, or for when you need to customize the virtual agent's behavior from turn to turn. It monitors the conversation between the contact and the virtual agent turn by turn. It sends each utterance to the virtual agent. The virtual agent analyzes the utterance for intent and context and determines the response to give. The action passes the virtual agent's response to the contact. When the conversation is complete, the action continues the script.

If you want to configure barge-in or no-input behavior, additional scripting is required.