Google Dialogflow ES

Google Dialogflow ES is a third-party platform that provides voice virtual agents. Virtual agents interpret what your contacts say and respond appropriately. They do this using technologies such as natural language understanding (NLU) and machine learning.

Virtual agents are flexible and can provide a range of functions to suit the needs of your organization. For example, you can design your virtual agent to handle a few simple tasks or to serve as a complex interactive agent.

CXone supports using Google Dialogflow ES with voice channels only.

Dialogflow ES and CX are public offerings that you can purchase directly through Google. However, the public version does not have full telephony features or native connections between Dialogflow and Google Contact Center AI Agent Assist. These features are available when purchasing through NICE CXone partners.

Comparison of Google Dialogflow ES and CX

CXone supports Google Dialogflow ES and CX. The two versions are similar, but have some key differences.

Dialogflow ES is suitable for small, simple bots. It simulates nonlinear conversation paths using a flat structure of intents and context as a guide. This approach doesn't support large or complex bots. You can pass contexts using the customPayload property of the Virtual Agent Hub Studio action used in your scripts. These bots use context data to determine the contact's intents.

Dialogflow CX supports complex, nonlinear conversational flow suitable for large, complex bots. It allows intents to be reused and doesn't require contexts. You can pass customPayload data, but you don't need to include contexts.
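
For example, a Snippet action in your script might build the customPayload that passes a context to a Dialogflow ES bot. The following is a minimal sketch only: the context name, lifespan, and orderId parameter are hypothetical, and the exact structure your bot expects depends on how it's configured. A Dialogflow CX bot could receive the same customPayload without the context properties:

DYNAMIC customPayload
// Hypothetical Dialogflow ES context; use the context names defined in your bot
customPayload.context.name = "order-followup"
customPayload.context.lifespan = "5"
// Hypothetical data the bot can use when determining the contact's intent
customPayload.orderId = "{orderId}"
// Convert the payload to JSON so it can be passed to the virtual agent action
ASSIGN customPayloadJSON = "{customPayload.asjson()}"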

Conversation Flow for Voice Virtual Agents

To start an interaction with a voice virtual agent, contacts call a phone number and reach your organization. The contact may be connected directly to the virtual agent, or they might need to choose an option in an IVR menu. Once the conversation with the virtual agent begins, the contact's utterances are transcribed into text so the virtual agent can analyze them. The virtual agent's responses are converted to synthesized speech using a text-to-speech service before being sent to the contact. Transcription and speech synthesis can happen in CXone or, in some cases, in the provider's platform.

After the conversation has started, the virtual agent analyzes the contact's utterances to understand the purpose or meaning behind what a person says. This is known as the contact's intent. When the intent is identified, the virtual agent sends an appropriate response to the contact.

Requests and responses are sent through Virtual Agent Hub and the script with each turn. This allows you to customize the virtual agent's behavior from turn to turn. For voice virtual agents, this is the utterance-based method of connection. All text virtual agent providers use this method.

At the end of the conversation, the virtual agent sends a signal to the Studio script. It can signal that the conversation is complete, or that the contact needs to speak with a live agent. If the conversation is complete, the interaction ends. If a live agent is needed, the script makes the request. The contact is transferred to an agent when one is available.

Once the conversation is complete, post-interaction tasks can be performed, such as recording information in a CRM.

Prerequisites

To use Google Dialogflow ES virtual agents with CXone, you need:

  • A Google Cloud Platform account.
  • A Google Dialogflow ES virtual agent configured and trained to respond to your contacts' requests.

Components of an Integration

The integration of Google Dialogflow ES involves the following components: 

Conversation Transcripts

You can capture the transcript and intent information from all Google Dialogflow ES voice conversations. You can use the captured data in any way you want. For example, in cases where an interaction is transferred to a live agent, you could display it for that agent. Another option could be to save it as a permanent record of the conversation. You can choose to capture just the transcript, just the intent information, both, or neither.

If you want to capture this information, you must enable it in the Google Dialogflow ES configuration settings in Virtual Agent Hub. You must also configure the Studio script used with your virtual agent. The script must include an action configured to manage the captured data. Captured data is stored temporarily for the life of the contact ID. If you need to save it, you can configure the script to send it to an archive. You are responsible for scrubbing all saved data for PII (Personally Identifiable Information).
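
For example, after the captured data has been retrieved in your script, a Snippet action could package it for archiving. This is a hedged sketch: the botTranscript variable and the archiveRecord structure are illustrative assumptions, not names used by Virtual Agent Hub:

// Sketch only: botTranscript is assumed to hold the captured transcript data
DYNAMIC archiveRecord
archiveRecord.contactId = "{contactid}"
archiveRecord.transcript = "{botTranscript}"
// Convert to JSON so a later API or storage action can send it to your archive
ASSIGN archivePayload = "{archiveRecord.asjson()}"
// Scrub PII from the data before saving it permanently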

Speech Context Hints

Speech context hints are words and phrases sent to the transcription service. They're helpful when there are words or phrases that need to be transcribed a certain way. Speech context hints can help improve the accuracy of speech recognition. For example, you can use them to improve the transcription of information such as address numbers or currency phrases.

If you want to use speech context hints, you must add them to your script. Dialogflow speech context hints are sent in the custom payload. You must include two parameters: 

  • speechContexts.phrases: The Google class token for the hint you want to give. The token must match the language and locale of your contacts. If you want to add multiple tokens, add a speechContexts.phrases parameter for each token.
  • speechContexts.boost: A weighted numeric value between 1 and 20 assigned to the specified phrase. The transcription service uses this value when selecting a possible transcription for words in the audio data. The higher the value, the more likely the transcription service is to choose that word or phrase over the alternatives.

For example:

DYNAMIC customPayload
// Google class token for alphanumeric sequences, such as confirmation codes
customPayload.speechContexts.phrases="$OOV_CLASS_ALPHANUMERIC_SEQUENCE"
// Weight the hint so the transcription service favors this interpretation
customPayload.speechContexts.boost=10

You can see the contents of this parameter in Studio traces and application logs.

Custom Scripting Guidelines

Before integrating a virtual agent, you need to know:

  • Which script you want to add a virtual agent to.
  • The virtual agent Studio action you need to use.

  • Where the Studio actions must be placed in your script flow.
  • The configuration requirements specific to the virtual agent you're using.
  • How to complete the script after adding the virtual agent action. You may need to: 
    • Add initialization snippets as needed to the script using Snippet actions. This is required if you want to customize your virtual agent's behavior. See the sketch after this list for an example.
    • Re-configure the action connectors to ensure proper contact flow and correct any potential errors.
    • Use the OnReturnControlToScript branch to handle hanging up or ending the interaction. If you use the Default branch to handle hanging up or ending an interaction, your script may not work as intended.
    • Complete any additional scripting and test the script.
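
For example, an initialization snippet placed before the virtual agent action might set up the customPayload and any variables the rest of the script relies on. This is a sketch under assumptions: the greetingPlayed variable is illustrative, and only the speechContexts properties shown earlier come from this integration's documented payload:

// Illustrative initialization snippet
ASSIGN greetingPlayed = "false"
DYNAMIC customPayload
customPayload.speechContexts.phrases = "$OOV_CLASS_ALPHANUMERIC_SEQUENCE"
customPayload.speechContexts.boost = 10
// Convert the payload to JSON for the virtual agent action's customPayload property
ASSIGN customPayloadJSON = "{customPayload.asjson()}"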

Ensure that all parameters in the virtual agent actions you add to your script are configured to pass the correct data. The online help pages for the actions cover how to configure each parameter.

Additionally, ensure that you completely configure your virtual agent on the provider side. Verify that it's configured with all possible default messages. This includes error messages or messages indicating an intent has been fulfilled.

You may be able to obtain template scripts for use with virtual agent integrations from NICE CXone Expert Services. If you need assistance with scripting in Studio, contact your CXone Account Representative, see the Technical Reference Guide section in the online help, or visit the NICE CXone Community site.

Supported Action for Voice Virtual Agents

The Voicebot Exchange action is for complex virtual agents, or for when you need to customize the virtual agent's behavior from turn to turn. It monitors the conversation between the contact and the virtual agent turn by turn. It sends each utterance to the virtual agent. The virtual agent analyzes the utterance for intent and context and determines the response to give. The action passes the virtual agent's response to the contact. When the conversation is complete, the action continues the script.

If you want to configure barge-in or no-input behavior, additional scripting is required.
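
For example, barge-in and no-input handling are typically adjusted through properties built in a Snippet action and passed to the Voicebot Exchange action. The sketch below is heavily hedged: the nextPromptBehaviors object and its property names are assumptions for illustration only, so check the Voicebot Exchange online help for the exact names and structure your release supports:

// Sketch only: the object and property names below are assumptions
DYNAMIC nextPromptBehaviors
nextPromptBehaviors.bargeIn = "true"
nextPromptBehaviors.noInputTimeoutMs = "5000"
// Convert to JSON before passing it to the action
ASSIGN nextPromptBehaviorsJSON = "{nextPromptBehaviors.asjson()}"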