Prompt Behaviors

You can customize some behaviors of voice-based virtual agents (software applications that handle customer interactions in place of a live human agent) at each turn of the conversation. This includes behaviors such as comfort noise, barge in, and timeouts.

You can customize the behavior of all turns in a conversation or just one turn:

  • All Conversation Turns: Create a Default Next Prompt Behaviors snippet to have the script use the defined behaviors as the default for all turns during the conversation.
  • One Turn: If you want to specify a different set of behaviors for a particular turn during an interaction, create a Next Prompt Behaviors snippet. For example, during normal conversation turns, you might not want DTMF collection enabled. But if the virtual agent needs to prompt the contact to enter information, you can create a Snippet for that prompt that includes DTMF collection rules.

The behaviors described on this page can only be configured in Studio scripts that use Studio actions that have the nextPromptBehaviors property, such as Voicebot Exchange or Cloud Transcribe. Customization of these behaviors is done using Snippet actions with custom code. The code must be written in Snippet, an in-house scripting language used in Studio scripts.

For information and example code for each behavior, see the online help for the Next Prompt Behaviors snippet or the Default Next Prompt Behaviors snippet.

The behaviors described on this page also apply to non-virtual agent applications of Turn-by-Turn Transcription that use the Cloud Transcribe action.
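To give a sense of the overall pattern, the sketch below shows what a minimal Default Next Prompt Behaviors snippet might look like. It assumes the common approach of building a dynamic data object and serializing it with asjson(); the variable names defaultNextPromptBehaviors and defaultNextPromptBehaviorsJson and the 5-second value are illustrative, so confirm the exact names and structure in the snippet online help.

    // Illustrative sketch: build a dynamic object to hold the default behaviors.
    DYNAMIC defaultNextPromptBehaviors

    // Wait up to 5 seconds (5000 ms) for the contact to start speaking on each turn.
    defaultNextPromptBehaviors.silenceRules.millisecondsToWaitForUserResponse = 5000

    // Serialize the object so the virtual agent action can consume it.
    ASSIGN defaultNextPromptBehaviorsJson = "{defaultNextPromptBehaviors.asjson()}"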

Timeout and Silence Handling

You can configure options to handle silences on both sides of the conversation between a contact and a virtual agent. Virtual agents encounter silence when the contact takes too long to respond or if there are pauses in the middle of an utterance. Contacts may experience silence if the virtual agent takes too long to respond.

Contact Timeouts

You can configure how long the virtual agent waits for the contact to respond in the following situations:

  • No Response Timeout: You can configure how long the virtual agent waits for the contact to begin speaking on each turn in the conversation. The timer for this parameter begins as soon as it's the contact's turn in the conversation. Use the silenceRules.millisecondsToWaitForUserResponse parameter to configure this setting.
  • Mid-Utterance Pauses: You can configure how long the virtual agent waits for the contact to continue speaking if they pause in the middle of an utterance (what a contact says or types). Use the utteranceConfig.maxPostEnergySilenceMS parameter to configure this setting.
  • Noisy Environments: If the contact is in a location with a lot of background noise, the virtual agent may have a hard time determining if the contact is speaking or when they're finished. This is a problem particularly when barge in is enabled. You can configure an additional timeout setting to give the virtual agent more time to determine what the contact is saying. See the Background Noise Handling section for more information.

  • Pauses between Entering DTMF Digits: If the script allows the contact to enter DTMF tones (signaling tones generated when a user presses or taps a key on their telephone keypad), you can set a timeout for how long the script waits between digits. Use the audioCollectionRules.dtmfRules.interDigitTimeoutMilliseconds parameter to configure this setting.
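As an illustration, a Next Prompt Behaviors snippet for a single turn might combine these timeouts as shown below. The parameter names come from the list above; the nextPromptBehaviors and nextPromptBehaviorsJson variable names and the millisecond values are placeholders to adapt to your script.

    // Illustrative timeout settings for one conversation turn.
    DYNAMIC nextPromptBehaviors

    // Wait up to 5 seconds for the contact to begin speaking.
    nextPromptBehaviors.silenceRules.millisecondsToWaitForUserResponse = 5000

    // Treat a pause longer than 800 ms as the end of the utterance.
    nextPromptBehaviors.utteranceConfig.maxPostEnergySilenceMS = 800

    // Wait up to 3 seconds between DTMF digits.
    nextPromptBehaviors.audioCollectionRules.dtmfRules.interDigitTimeoutMilliseconds = 3000

    ASSIGN nextPromptBehaviorsJson = "{nextPromptBehaviors.asjson()}"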

Virtual Agent Delays

Sometimes the virtual agent may take longer to respond than expected. Contacts may assume their call was disconnected if the line is silent for too long. You can configure comfort noise to play in these situations. Comfort noise helps reassure the contact that the call is still active. You can use any WAV audio file to provide comfort noise.

The following properties allow you to manage virtual agent delays: 

  • Enable comfort noise: The silenceRules.engageComfortSequence parameter allows you to enable comfort noise.
  • Define comfort noise trigger: Use the silenceRules.botResponseDelayTolerance parameter to specify how long, in milliseconds, the script waits before starting the comfort noise sequence.
  • Provide the comfort audio file: You can use an existing audio file or create a new file using a Studio action that supports the Prompt Manager, such as Play. You can also use a third-party application to create a file and upload it. The files must be in WAV format. Create the comfort prompt sequence (a segment of an audio prompt played for the contact) with the silenceRules.comfortPromptSequence.prompts[1].audioFilePath parameter.
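For example, a snippet that enables comfort noise might look like the following sketch. The file name ComfortNoise.wav is a placeholder, and the variable names follow the same illustrative pattern as the earlier example.

    // Illustrative comfort noise configuration.
    DYNAMIC nextPromptBehaviors

    // Turn on the comfort noise sequence.
    nextPromptBehaviors.silenceRules.engageComfortSequence = true

    // Start the sequence if the virtual agent has not responded within 3 seconds.
    nextPromptBehaviors.silenceRules.botResponseDelayTolerance = 3000

    // Placeholder WAV file to play while the contact waits.
    nextPromptBehaviors.silenceRules.comfortPromptSequence.prompts[1].audioFilePath = "ComfortNoise.wav"

    ASSIGN nextPromptBehaviorsJson = "{nextPromptBehaviors.asjson()}"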

Silence Handling

The following properties allow you to manage aspects of silence in virtual agent interactions: 

  • Comfort noise: As described in the Virtual Agent Delays section, comfort noise allows you to play an audio file when the virtual agent takes longer to respond than expected. The comfort noise lets the contact know that the call is still live.
  • Trim silence: You can have the silence at the beginning of each utterance removed before the audio is sent to the virtual agent. This reduces the size of the audio, which helps to prevent or reduce latency in the virtual agent's processing of each response. Use the silenceRules.trimSilence parameter to enable this feature. It's disabled by default.
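Enabling silence trimming is a single setting, as in this sketch (variable names illustrative):

    // Illustrative: remove leading silence before audio is sent to the virtual agent.
    DYNAMIC nextPromptBehaviors
    nextPromptBehaviors.silenceRules.trimSilence = true
    ASSIGN nextPromptBehaviorsJson = "{nextPromptBehaviors.asjson()}"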

Background Noise Handling

When a contact is speaking during a virtual agent conversation, CXone waits to hear silence before sending the utterance to the virtual agent. If the contact is in a noisy environment, the virtual agent can have a hard time knowing if the contact is speaking or when they are finished. This can be an issue especially when barge in is enabled.

Background noise can cause the millisecondsToWaitForUserResponse timeout to trigger: if the noise continues long enough that the timeout is reached, the script takes the userInputTimeout branch.

To prevent the virtual agent from treating situations like this as if the contact didn't respond, you can configure the utteranceConfig.maxUtteranceMilliseconds parameter in Prompt Behaviors snippets. The benefit of this setting is that if the timeout is reached, the script sends the captured audio to the virtual agent. The virtual agent interprets the audio as best it can and takes the most appropriate branch.

The timer for maxUtteranceMilliseconds starts as soon as the virtual agent detects audio, whether it's the contact speaking or background noise. The maxUtteranceMilliseconds parameter cancels the millisecondsToWaitForUserResponse timer. This effectively extends the timeout limit by the length of the pause, that is, the amount of time between the millisecondsToWaitForUserResponse and maxUtteranceMilliseconds values. When the maxUtteranceMilliseconds limit is reached, the virtual agent attempts to determine the contact's intent (what the contact wants to communicate or accomplish) from the captured audio. The script takes one of the following branches, based on what the virtual agent determines: 

  • Intent Found: If the virtual agent determines an intent, the script takes the PromptAndCollectNextResponse branch.

  • No Intent Found: If the virtual agent cannot determine an intent, the script takes either the UserInputNotUnderstood branch or the UserInputTimeout branch. It takes the most appropriate branch based on the situation.

Default Setting

There is no set default for maxUtteranceMilliseconds. The appropriate amount of time varies depending on the situation. If a short response such as yes or no is expected from the contact, setting maxUtteranceMilliseconds to 10 seconds is reasonable. Other responses, such as account numbers or addresses, may require more time.
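For instance, for a turn that expects a short yes or no answer, a snippet might cap the utterance at 10 seconds, as in this sketch (variable names illustrative):

    // Illustrative: cap audio capture at 10 seconds for a short expected response.
    DYNAMIC nextPromptBehaviors
    nextPromptBehaviors.utteranceConfig.maxUtteranceMilliseconds = 10000
    ASSIGN nextPromptBehaviorsJson = "{nextPromptBehaviors.asjson()}"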

DTMF Collection

You can configure your script to collect DTMF tones from the contact. Use the following parameters to configure DTMF collection: 

  • Enable DTMF detection: Set the audioCollectionRules.dtmfRules.detectDtmf parameter to true, then add parameters for the DTMF collection behaviors you want.
  • Clear the DTMF tone buffer: If you only want to collect tones entered after the script begins the action that prompts the contact, set audioCollectionRules.dtmfRules.clearDigits to true. This clears the buffer that caches DTMF tones when the contact presses keys on their phone keypad.
  • Require a termination character: If you want contacts to enter a character that indicates they're finished entering digits, include the audioCollectionRules.dtmfRules.terminationCharacter parameter and set the value to the character you want contacts to enter. For example, the pound sign (#) is commonly used as a termination character.
  • Strip termination character: If you require a termination character, you can have the script remove the termination character from the captured DTMF tones. When removed, the termination character is not processed by the script. Include the audioCollectionRules.dtmfRules.stripTerminator parameter and set it to true to strip the termination character.
  • Configure a timeout between digits: You can configure a timeout to allow the script to handle situations when the contact takes a long time to enter the next digit. Include the audioCollectionRules.dtmfRules.interDigitTimeoutMilliseconds parameter and set it to the number of milliseconds you want the script to wait for the next digit.
  • Set a maximum number of digits accepted: You can configure the script to accept a maximum number of digits. If you require a termination character, include it in the number you use for this property. For example, if the prompt instructs the contact to enter an eight-digit account number plus a termination character, set audioCollectionRules.dtmfRules.maxDigits to 9.
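Putting these together, a snippet for a prompt that collects an eight-digit account number might look like the sketch below. The parameter names come from the list above; the variable names and values are illustrative.

    // Illustrative DTMF collection settings for an eight-digit account number.
    DYNAMIC nextPromptBehaviors

    // Turn on DTMF detection for this turn.
    nextPromptBehaviors.audioCollectionRules.dtmfRules.detectDtmf = true

    // Ignore digits pressed before this prompt started.
    nextPromptBehaviors.audioCollectionRules.dtmfRules.clearDigits = true

    // Require # to end entry and strip it from the captured digits.
    nextPromptBehaviors.audioCollectionRules.dtmfRules.terminationCharacter = "#"
    nextPromptBehaviors.audioCollectionRules.dtmfRules.stripTerminator = true

    // Wait up to 3 seconds between digits.
    nextPromptBehaviors.audioCollectionRules.dtmfRules.interDigitTimeoutMilliseconds = 3000

    // Eight digits plus the termination character.
    nextPromptBehaviors.audioCollectionRules.dtmfRules.maxDigits = 9

    ASSIGN nextPromptBehaviorsJson = "{nextPromptBehaviors.asjson()}"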

Allow Contacts to Speak Over Script Audio (Barge)

Barge allows contacts to speak over the audio the script is playing. This includes audio responses from the virtual agent. If you want to enable this option, set the audioCollectionRules.bargeConfiguration.enableSpeakerBarge parameter to true.

When you enable barge in, the virtual agent is especially sensitive to background noise. This can cause the virtual agent to have a hard time knowing if the contact is speaking or when they have finished speaking. To prevent the virtual agent from treating situations like this as if the contact didn't respond, you can configure the utteranceConfig.maxUtteranceMilliseconds parameter in Prompt Behaviors snippets.
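For example, a snippet that enables barge and also caps how long audio is captured might look like this sketch (the variable names and the 15-second value are illustrative):

    // Illustrative: allow barge and limit how long audio is captured.
    DYNAMIC nextPromptBehaviors

    // Let the contact speak over prompts and virtual agent audio.
    nextPromptBehaviors.audioCollectionRules.bargeConfiguration.enableSpeakerBarge = true

    // Send whatever audio has been captured after 15 seconds, even if silence
    // has not been detected, so background noise cannot hold the turn open.
    nextPromptBehaviors.utteranceConfig.maxUtteranceMilliseconds = 15000

    ASSIGN nextPromptBehaviorsJson = "{nextPromptBehaviors.asjson()}"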