Built-in functions of DashaScript dialogue design

DashaScript provides a number of built-in functions that give you fine-grained control over the interactions Dasha AI has with your users. They are designed to help you create human-like conversational experiences for your users.

Built-in functions are inline and can be called from executable sections (the do sections, or transitions within a node or a digression). Built-in functions are identified by the # prefix.

```dsl
node myNode {
    do {
        // Calling the sayText function.
        #sayText("Wake up, Neo. The Matrix has you.");
        #sayText("Do you want to follow the white rabbit?");
        wait *;
    }
    transitions {
        // Calling the messageHasIntent function.
        transition1: goto myNode2 on #messageHasIntent("yes");
        transition2: goto myNode2 on #messageHasIntent("no");
    }
    onexit {
        transition1: do {
            // Calling the #log function (analogous to console.log).
            #log("He follows the white rabbit.");
        }
        transition2: do {
            #log("He does not follow the white rabbit.");
        }
    }
}
```

Blocking calls

Blocking (synchronous) functions block execution of further operations until the function resolves. When it resolves, the function returns its result and code execution continues.

GPT

answerWithGPT

Answers the user using GPT, with the current conversation history and the ability to call DSL functions.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| prompt | string | A string with the GPT prompt |
| promptName | string | A unique name for the prompt, used for better response tracking |
| gptOptions | { model: string; openai_apikey: string; } | model: the name of the model, for example openai/gpt-4; openai_apikey: your API key for OpenAI (see gptOptions) |
| repeatMode? | IRepeatMode | Controls repeating logic (see IRepeatMode). Default value: override |
| interruptible? | boolean | Whether the phrase can be interrupted. Default value: false |
| args? | { [x: string]: string; } | Arguments placed directly into the prompt using the {{argName}} construction |
| sayOptions? | { fillerSpeed?: number; fillerTexts?: string[]; fillerDelay?: number; fillerStartDelay?: number; speed?: number; interruptDelay?: number; pauseOnVoice?: boolean; useGluing?: boolean; interruptConditions?: (InterruptCondition \| InterruptCondition[])[] } | Overrides speed and emotion of the spoken text, delays before interruption by human voice, and interruptConditions (see Say Options) |

Returns

Returns an object with fields:

  • interrupted - true if GPT was interrupted by the user
  • functionCalled - true if a function was called during or instead of an answer
  • completed - true if GPT completed its answer (and the connection to the user was not closed)
  • saidPhrase - the phrase GPT said
  • calledFunctionNames - names of the functions that were called (an empty array if none were called)
  • thinking - text showing GPT's thinking process prior to response generation (only when Thinking is enabled)

gptOptions

| Name | Type | Description |
| --- | --- | --- |
| ignore_json_output | boolean | If true, ignores GPT responses that are formatted as JSON. JSON-formatted responses might be recognized incorrectly by the bot's Text-to-Speech. Default value is false. |
| allow_function_name_in_response | boolean | If true, allows function names in GPT responses; this helps when GPT calls a function in its response. If set to false, any generated response containing the name of an existing function is skipped (e.g. if a function is named "Date", any response containing the word "date" is skipped). Default value is true. |
| stop_sequences | string | Stops generating further response once a given sequence is generated. There can be up to 4 sequences; different sequences are separated by the ; sign. Default value is empty. |
| ignore_current_prepend | number | Defines whether the received prepend is passed to GPT's history. Must be an integer. If 0, the prepend is ignored; otherwise it is passed to the history. Default value is 0. |
| top_p | number | Defines the creativity of GPT's response. If set to 0.1, GPT generates responses that fit the top 10% of the probability mass. It is recommended to change this option OR temperature, but not both. Must be a float. Default value is 0.5. |
| temperature | number | Defines the creativity of GPT's response. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. It is recommended to change this option OR top_p, but not both. Must be a float. Default value is 0.5. |
| max_tokens | number | The maximum number of tokens allowed for the generated answer. Setting a limit with this parameter might lead to a partial answer. Must be an integer. Default is a full response. |
| presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Must be a float. Default value is 0. |
| frequency_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Must be a float. Default value is 0. |
| prepend | string | Your input phrase that GPT begins with before sending a generated response. Default value is empty. |
| function_call | string | Controls how the model responds to function calls. "none" means the model does not call a function and responds to the end user. "auto" means the model picks between responding to the end user or calling a function. You can also supply a function name, in which case only that function will be called. Default value is auto. |
| openai_apikey | string | Your OpenAI API key. |
| openai_endpoint | string | Your GPT endpoint. For example, you can change it to Azure's endpoint if you plan to use Azure OpenAI services. Default is OpenAI's endpoint. |
| sort_history | boolean | Sorts the answerWithGPT function history by time so the transcription can be read. Changing it is not recommended. Default value is true. |
| merge_history | boolean | Merges different phrases within the same turn into one, producing a coherent transcription. Changing it is not recommended. Default value is true. |
| history_length | number | How many phrases from the bot or the user are passed to GPT as history. By default the full history is passed. |
| log_history | boolean | Defines whether the history is passed to the debug log. Default value is false. |
| max_retries | number | The maximum number of times GPT is called within the same turn. An answerWithGPT retry happens in two cases: (1) the user says something while GPT is generating a response, in which case the retry includes the new user phrase in the history; (2) with Thinking enabled, a retry happens if the generated response does not fit the required thinking format. Must be an integer. Default value is 1. |
| retry_temperature_scale | number | Scales (increases) temperature on each retry. Default value is 2.0. |
| retry_topp_scale | number | Scales (increases) top_p on each retry. Default value is 2.0. |
| save_response_in_history | boolean | Enables/disables saving function call requests and responses in GPT history. Can be used for service-related requests that are not part of the conversation. Default value is true. |
| provided_functions | string[] | Specifies which functions from the current context may be used for a request. By default all functions in the current context are used. |

Say Options GPT

| Name | Type | Description |
| --- | --- | --- |
| fillerTexts | string[] | An array of strings used as fillers (see Fillers) |
| fillerSpeed | number | Defines the speed of a synthesized filler phrase. Default value: as in the phrasemap |
| fillerDelay | number | Defines the time in seconds to wait after the previous filler phrase. Default value: 2 |
| fillerStartDelay | number | Defines the time in seconds given to GPT to produce an answer before fillers start. Default value: 2 |
| speed | number | Defines the speed of a synthesized phrase. Default value: as in the phrasemap |
| interruptDelay | number | Defines the time in seconds of human voice needed to interrupt the phrase being pronounced. Default value: 2 |
| useGluing | boolean | If true, composite phrasemap phrases are concatenated before synthesizing. Default value: true (recommended) |
| interruptConditions | (InterruptCondition \| InterruptCondition[])[] | Defines triggers in the user's utterance that interrupt the phrase currently being pronounced |
| pauseOnVoice | boolean? | Overrides the current pauseOnVoice for this call; true pauses pronunciation of the current phrase if the human starts to speak. Default value: null, meaning the pauseOnVoice configuration from connectOptions is used |

Example

```dsl
node gpt {
    do {
        // We will be here when the user says something, or a retry is required.
        var a = #answerWithGPT(`Your name is Mary. You are working in fruit seller contact center. You can only tell customer a price of requested fruit. If you have no price for fruit tell: We have no such fruit `,
            interruptible: true,
            gptOptions: { model: "openai/gpt-4", openai_apikey: "YOUR_APIKEY" },
            sayOptions: { interruptDelay: 1.0 });
        // Call answerWithGPT one more time to pass the function result back to GPT.
        if (a.functionCalled) {
            #log("Called a function, retry");
            goto retry;
        }
        wait *;
    }
    transitions {
        gpt: goto gpt on true;
        retry: goto gpt;
    }
}
```

askGPT

Asks GPT for help and forms an answer to the user around its response.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| prompt | string | A string with the GPT prompt |
| gptOptions | { model: string; openai_apikey: string; } | model: the name of the model (e.g. openai/gpt-4); openai_apikey: your API key for OpenAI (see gptOptions) |
| args? | { [x: string]: string; } | Arguments placed directly into the prompt using the {{argName}} construction |

Returns

Returns an object with fields:

  • functionCalled - true if a function was called during or instead of GPT's response
  • completed - always true
  • responseText - GPT's response to the question
  • calledFunctionNames - names of the functions that were called (an empty array if none were called)
  • thinking - text showing GPT's thinking process prior to response generation
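For example, #askGPT could be used to bring generated text back into the script and decide there what to do with it. A minimal sketch; the node name, prompt, and the $lastRequest variable are illustrative:

```dsl
node summarize {
    do {
        // Ask GPT a one-off question; the generated text comes back in responseText.
        var answer = #askGPT("Summarize the customer's request in one sentence: {{request}}",
            gptOptions: { model: "openai/gpt-4", openai_apikey: "YOUR_APIKEY" },
            args: { request: $lastRequest });
        #sayText(answer.responseText);
        wait *;
    }
    transitions {}
}
```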

Fillers

Fillers are short texts spoken by the robot to create a sense of presence and activity in the dialogue. Fillers are very helpful when GPT takes a long time to generate a response.

Fillers are turned on by adding the preferred phrases to fillerTexts. It should look like this:

```dsl
var a = #answerWithGPT($prompt,
    interruptible: true,
    gptOptions: { model: "openai/gpt-4" },
    sayOptions: {
        interruptDelay: 1.0,
        fillerTexts: [
            "Okay...",
            "Uhm...",
            "Hm...",
        ],
        fillerSpeed: 1.0
    });
```

Thinking

Thinking is a feature aimed at improving how closely GPT follows the requirements of the prompt, and at letting you check, through the debug log, at which stage of the dialogue GPT failed.

You can inspect the thinking process with the Inspector tool on our platform.


Thinking parameters can be changed via `gptOptions` in the `askGPT` or `answerWithGPT` functions.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| thinking_enabled | boolean | If true, turns on the Thinking feature. Default value is false. |
| thinking_choices | number | Defines how many different variations of a response GPT is allowed to generate; the best generated response is used. Must be an integer. Default value is 1. |
| thinking_fallback_enabled | boolean | If true, allows using generated responses that do not fit the thinking format (in cases where none of the choices fit it), so that answerWithGPT can still give some kind of response to the user. Default value is false. |
| thinking_empty_enabled | boolean | Allows empty thinking messages. Default value is false. |
| thinking_multi_answer_enabled | boolean | Allows a thinking-reply-thinking... chain within the same turn. Default value is false. |
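Thinking options sit alongside the usual gptOptions. A sketch; the model and the $prompt variable are illustrative:

```dsl
var a = #answerWithGPT($prompt,
    gptOptions: {
        model: "openai/gpt-4",
        openai_apikey: "YOUR_APIKEY",
        thinking_enabled: true,
        // Generate three candidate responses; the best one is used.
        thinking_choices: 3,
        // Fall back to a plain response if no candidate fits the thinking format.
        thinking_fallback_enabled: true
    });
// Inspect the reasoning behind the chosen response.
#log(a.thinking);
```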

Session control

connect (blocking call)

Attempts to establish a connection (starts a call) and returns the result of the attempt. Blocks execution until the result of the connection attempt is received.

#connect(endpoint, options);

Parameters

| Name | Type | Description |
| --- | --- | --- |
| endpoint | string | The endpoint to connect to |
| options | ConnectOptions | See Connect Options |

Returns

object

Object depends on connection attempt outcome.

On success: returns an object with the following fields:

| Name | Type | Description |
| --- | --- | --- |
| msgId | string literal | Contains the value "OpenedSessionChannelMessage" |
| endpoint | string | The endpoint with which the connection is established |

On failure: returns an object with the following fields:

| Name | Type | Description |
| --- | --- | --- |
| msgId | string literal | Contains the value "FailedOpenSessionChannelMessage" |
| reason | string | The reason why the connection could not be established |
| details | string | Additional information |
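Because the returned object depends on the outcome, the msgId field can be checked after the call. A sketch; the node name and the $endpoint variable are illustrative:

```dsl
node start {
    do {
        var result = #connect($endpoint);
        if (result.msgId == "FailedOpenSessionChannelMessage") {
            // The call did not go through; log the reason and stop the script.
            #log("Connection failed: " + result.reason);
            exit;
        }
        #sayText("Hello!");
        wait *;
    }
    transitions {}
}
```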

connectSafe (blocking call)

Establishes a connection (starts a call) and returns the result of the attempt. If the connection fails, it stops script execution with the error "Connection failed". Blocks execution until the result of the connection attempt is received.

#connectSafe(endpoint, options);

Parameters

| Name | Type | Description |
| --- | --- | --- |
| endpoint | string | The endpoint to connect to |
| options | ConnectOptions | See Connect Options |

Returns

object

Object with the following fields:

| Name | Type | Description |
| --- | --- | --- |
| message | IConnectionResult | The message received in response to a connection request |

IConnectionResult is an object with the following fields:

| Name | Type | Description |
| --- | --- | --- |
| msgId | string literal | Contains the value "OpenedSessionChannelMessage" |
| endpoint | string | The endpoint with which the connection is established |

connectOptions

| Name | Type | Description |
| --- | --- | --- |
| humanName | string | Name of the human in transcriptions (#getTranscription, #getFormattedTranscription), for example the agent in a warm transfer. Default: human |
| sip_fromUser | string | Overrides the SIP FROM user part field (DID) for an outbound call |
| sip_server | string | Overrides the SIP target for an outbound call |
| sip_domain | string | Overrides the SIP domain for an outbound call |
| sip_displayName | string | Overrides the display name for an outbound call |
| sip_authUser | string | Overrides the username for SIP authentication |
| sip_authPassword | string | Overrides the SIP password |
| sip_transport | "tcp" or "udp" | Overrides the transport protocol for SIP |
| cache_tts_before_connect | boolean | false disables TTS caching of the phrasemap, which might reduce the time to pick up an inbound call. Default value: true |
| pauseOnVoice | boolean | true pauses pronunciation of the current phrase if the human starts to speak. Default value: false |
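Options are passed as the second argument of #connect or #connectSafe. A sketch; the particular values are illustrative and should match your SIP configuration:

```dsl
var result = #connectSafe($endpoint, {
    sip_displayName: "Dasha",
    sip_transport: "tcp",
    // Pause the current phrase whenever the human starts speaking.
    pauseOnVoice: true
});
```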

forward

Emits a SIP REFER with the Refer-To username equal to the endpoint for SIP-based channels. Does nothing for a text-based channel (chat).

#forward(endpoint);

Parameters

| Name | Type | Description |
| --- | --- | --- |
| endpoint | string | The endpoint to forward the call to |

Returns

boolean

Always true.
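A sketch of a warm-transfer hand-off; the node name and the number are illustrative:

```dsl
node transfer_to_agent {
    do {
        #sayText("Transferring you to a human agent now.");
        // Emits SIP REFER; the telephony side performs the actual transfer.
        #forward("16175551234");
        exit;
    }
}
```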

disconnect (blocking call)

Closes the connection (ends the call). Blocks execution until the result of the attempt to close the connection is received.

#disconnect();

Returns

object

Object with the following fields:

| Name | Type | Description |
| --- | --- | --- |
| hangupMsgId | string | The identifier of the message indicating that the connection was closed |

disableRecognition

Disables speech-to-text (STT) and natural language understanding (NLU) in the current block or the main context (the main graph is implicitly a block too).

Note that disabling recognition interrupts the calculation of the current session's duration.

#disableRecognition();

Returns

boolean

Always true.

enableRecognition

Enables speech-to-text (STT) and natural language understanding (NLU) in the current block or the main context (the main graph is implicitly a block too).

Note that enabling recognition turns the calculation of the current session's duration back on.

#enableRecognition();

Returns

boolean

Always true.
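These two functions can bracket a step during which user speech should be ignored. A sketch; the node and the phrase are illustrative:

```dsl
node play_announcement {
    do {
        // Ignore anything the user says while the announcement plays.
        #disableRecognition();
        #sayText("Please listen carefully, as our menu options have changed.");
        #enableRecognition();
        wait *;
    }
    transitions {}
}
```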

NLG Control

say (blocking call)

Sends a command to say a phrase from connected phrasemap.

Blocks control until the phrase has been said (or interrupted).

#say(phraseId, args, repeatMode, interruptible, options);

Parameters

| Name | Type | Description |
| --- | --- | --- |
| phraseId | string | Phrasemap key of the phrase to be spoken |
| args? | { [x: string]: unknown; } | Dynamic phrase arguments that will be used to construct the phrase (see the NLG doc) |
| repeatMode? | IRepeatMode | Controls repeating logic (see IRepeatMode). Default value: override |
| interruptible? | boolean | Whether the phrase can be interrupted. Default value: false |
| options? | { speed?: number; emotion?: string; interruptDelay?: number; interruptConditions?: (InterruptCondition \| InterruptCondition[])[] } | Overrides speed and emotion of the spoken text, the delay before interruption by human voice, and interruptConditions (see Say Options) |

Returns

boolean

true if the phrase was said completely, false if it was interrupted.
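A sketch, assuming the phrasemap contains a hypothetical dynamic phrase "greeting" that takes a name argument:

```dsl
node greet {
    do {
        // "greeting" is a hypothetical phrasemap key with a name argument.
        var completed = #say("greeting", { name: $userName }, interruptible: true);
        if (!completed) {
            #log("The greeting was interrupted.");
        }
        wait *;
    }
    transitions {}
}
```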

sayChanneled (blocking call)

#sayChanneled(phraseId, args, channelIds, options);

Channeled version of the builtin function #say.

Similarly to the #say it sends a command to pronounce a phrase from a phrasemap.

Unlike the #say function, #sayChanneled allows you to pronounce a phrase only in the specific channels given by the channelIds parameter. If channelIds is not provided, or is provided as an empty array, the phrase is pronounced in all channels. The function does not affect the phrase buffer, so it has no repeatMode parameter.

Blocks control until the phrase is spoken.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| phraseId | string | Phrasemap key of the phrase to be spoken |
| args? | { [x: string]: unknown; } | Dynamic phrase arguments that will be used to construct the phrase (see the NLG doc) |
| channelIds? | string[] | Channel ids to send the pronounce command to. If not provided, or an empty array, the command is sent to all available channels |
| options? | { speed?: number; emotion?: string; useGluing?: boolean; } | Overrides speed and emotion of the spoken text (see Say Options) |

Returns

boolean

Always true as it can not be interrupted.
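A sketch for a multi-channel scenario such as a warm transfer; the phrase key and channel id are illustrative:

```dsl
// Pronounce a summary only in the agent's channel, leaving the
// customer's channel silent.
#sayChanneled("transfer_summary", { customer: $userName }, ["agent"]);
```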

sayText (blocking call)

Sends a command to say the text. Differs from say in that the text parameter is interpreted literally rather than being resolved via phrasemap.

Blocks control until the phrase has been said (or interrupted).

Warning: it is not recommended to use this function in production.

More details

This function does not require a phrasemap, at the cost of several drawbacks: no static analysis is performed to check whether the phrase is available (in the case of prerecorded speech), and no pre-synthesis is performed (in the case of synthesized speech), which may cause delays at runtime; no caching is used either, so the phrase is re-synthesized on each run.

Thus, this function may be used for demonstration purposes, allowing the script to be as simple as possible, but in general it is not recommended for production cases.

#sayText(text, repeatMode, interruptible, options);

Parameters

| Name | Type | Description |
| --- | --- | --- |
| text | string | The text to say |
| repeatMode? | IRepeatMode | Controls repeating logic (see IRepeatMode). Default value: override |
| interruptible? | boolean | Whether the phrase can be interrupted. Default value: false |
| options? | { speed?: number; emotion?: string; interruptDelay?: number; interruptConditions?: (InterruptCondition \| InterruptCondition[])[] } | Controls speed and emotion of the spoken text, the delay before interruption by human voice, and interruptConditions (see Say Options) |

Returns

boolean

true if the phrase was completely said, false if it was interrupted.

Examples

```dsl
// just say some text
#sayText("Hello, how are you?");

// ----------
// say this text (will not be repeated if you call #repeat)
#sayText("Hello, how are you?", "ignore");

// ----------
#sayText("Hello, how are you?");
// add to the phrase buffer
#sayText("Would you like some coffee?", "complement");
// calling #repeat will say "Hello, how are you? Would you like some coffee?"

// ----------
// this phrase can be interrupted by the user, if they say something
// during `options.interruptDelay` sec (default value is 2 sec)
#sayText("Hello, how are you?", interruptible: true);

// ----------
// say this phrase fast
#sayText("Hello, how are you?", options: { speed: 1.5 });

// ----------
// say this phrase with the emotion extracted from the text "I love you"
// NOTE: you need to enable emotional Dasha TTS with `conv.audio.tts = "dasha-emotional";`
#sayText("Hello, how are you?", options: { emotion: "from text: I love you" });

// ----------
// say this phrase with the emotion extracted from the text "I love you", slowly
#sayText("Hello, how are you?", options: { emotion: "from text: I love you", speed: 0.7 });

// ----------
// build a phrase from a variable
#sayText("Hello " + $name + ", how are you?");

// ----------
// this phrase can be interrupted by the intent "wait"
#sayText("You know, I'm gonna tell you some very long long story...",
    options: { interruptConditions: [{ intent: "wait" }] });
```

sayTextChanneled (blocking call)

Channeled version of the builtin function #sayText.

Like #sayText, it sends a command to pronounce the given text.

Unlike the #sayText function, #sayTextChanneled allows you to pronounce the text only in the specific channels given by the channelIds parameter. If channelIds is not provided, or is provided as an empty array, the text is pronounced in all channels. The function does not affect the phrase buffer, so it has no repeatMode parameter. Also, it cannot be interrupted.

Blocks control until the phrase is spoken.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| text | string | The text to say |
| channelIds? | string[] | Channel ids to send the pronounce command to. If not provided, or an empty array, the command is sent to all available channels |
| options? | { speed?: number; emotion?: string; } | Controls speed and emotion of the spoken text (see Say Options) |

Returns

boolean

Always true as it can not be interrupted.

repeat (blocking call)

Sends a command to repeat phrases from the phrase buffer (see repeatMode argument of say or sayText).

Due to the variability of phrases (which are set in the phrasemap), calling #repeat() may enliven a dialogue and help it sound more human-like.

Blocks control until the phrase has been said (or interrupted).

#repeat(accuracy, interruptible, options);

Parameters

| Name | Type | Description |
| --- | --- | --- |
| accuracy? | IRepeatAccuracy | Phrasemap sub-key to use (default value: "repeat") |
| interruptible? | boolean | Whether the phrase can be interrupted. Default value: false |
| options? | { interruptDelay?: number; } | Controls the delay before interruption (see Say Options) |

IRepeatAccuracy is an enum of string literals that denote which phrasemap sub-key will be used for pronunciation:

  • first
  • repeat
  • short

Returns

boolean

true if the phrase was completely said, false if it was interrupted.

Phrase buffer

The phrase buffer is a system buffer used to accumulate phrases that you may pronounce again to repeat them to a user.

Use builtin function #repeat() to repeat phrases accumulated in buffer.

Due to the variability of phrases (which are set in the phrasemap), calling #repeat() may enliven a dialogue and help it sound more human-like.

The content of phrase buffer is controlled by repeatMode argument used to invoke phrases pronunciation (see IRepeatMode).

IRepeatMode

IRepeatMode is an enum of string literals that control the phrase buffer update logic:

  • override: overrides the phrase buffer. If a phrase is pronounced with repeatMode: "override", the buffer is cleared and the phrase is appended to the empty buffer.
  • complement: appends to the phrase buffer. The buffer is not cleared, and the current phrase is appended to it.
  • ignore: does not change the phrase buffer. The buffer is not cleared, and the current phrase is not appended to it.

Example

Suppose you have the following phrasemap content:

```json
{
    "default": {
        "voiceInfo": { "lang": "en-US", "speaker": "default" },
        "phrases": {
            "how_are_you": {
                "first": [{ "text": "How are you doing?" }],
                "repeat": [{ "text": "How are you today?" }]
            },
            "i_said": [{ "text": "I was saying..." }]
        }
    }
}
```

... and the following DSL code:

```dsl
#sayText("Looks like the weather is good today!");
// default repeatMode is "override"
// buffer: ["Looks like the weather is good today!"]

#sayText("I am wondering...", repeatMode: "override"); // override buffer content
// buffer: ["I am wondering..."]

#say("how_are_you", repeatMode: "complement");
// buffer: ["I am wondering...", "how_are_you"]

/** ... suppose some dialogue happens here ... */

#say("i_said", repeatMode: "ignore"); // buffer stays the same
// buffer: ["I am wondering...", "how_are_you"]

#repeat();
```

Then the last #repeat will actually trigger the following text to be pronounced by Dasha:

AI: "I am wondering..."
AI: "How are you today?"

So, the current buffer content was pronounced and (where possible) the "repeat" versions of phrases were used.

preparePhrase

Sends a command to prepare a phrase.

If a phrase specified in the phrasemap is static (i.e. it does not have any arguments), it is automatically prepared before the dialogue begins.

Otherwise, if the phrase is dynamic and requires arguments, it cannot be prepared in advance, so it is prepared when the #say function is called.

The preparation process takes some time, so if an unprepared phrase has to be prepared at runtime, it may cause lags in the dialogue.

To avoid this, you may use the #preparePhrase function before the actual dialogue starts to force-prepare a dynamic phrase with particular arguments.

The parameters phraseId and options are the same as for the #say function.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| phraseId | string | Phrasemap key of the phrase to be spoken |
| options | { emotion?: string; speed?: number; } | Overrides speed and emotion of the spoken text (see Say Options) |

Returns

boolean

Always true.
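A sketch; the phrasemap key "long_disclaimer" is hypothetical:

```dsl
node start {
    do {
        // Force-prepare the phrase before the conversation reaches it,
        // so synthesis does not cause a lag mid-dialogue.
        #preparePhrase("long_disclaimer");
        #connectSafe($endpoint);
        #sayText("Hello!");
        wait *;
    }
    transitions {}
}
```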

Say Options

NOTE: Different NLG control functions consume different options. See the description of the desired function.

| Name | Type | Description |
| --- | --- | --- |
| speed | number | Defines the speed of a synthesized phrase. Default value: as in the phrasemap |
| interruptDelay | number | Defines the time in seconds of human voice needed to interrupt the phrase being pronounced. Default value: 2 |
| useGluing | boolean | If true, composite phrasemap phrases are concatenated before synthesizing. Default value: true (recommended) |
| interruptConditions | (InterruptCondition \| InterruptCondition[])[] | Defines triggers in the user's utterance that interrupt the phrase currently being pronounced |
| pauseOnVoice | boolean? | Overrides the current pauseOnVoice for this call; true pauses pronunciation of the current phrase if the human starts to speak. Default value: null, meaning the pauseOnVoice configuration from connectOptions is used |

NOTE: If interruptDelay is set to a specific value, it is considered an additional trigger. Otherwise, the default behaviour (interrupting on a 2-second-long utterance) is ignored.

```dsl
type InterruptCondition =
    // triggers when the utterance has a particular sentiment
    { sentiment: "positive" | "negative"; }
    // triggers when the utterance contains a specific entity
    | { entity: string; value?: string; tag?: string; }
    // triggers when the utterance contains a specific intent
    | { intent: string; sentiment?: "positive" | "negative"; }
    // triggers when the utterance contains a specific intent (same as { intent: string })
    | string;
```

Examples

```dsl
// this phrase can be interrupted if the user's utterance has negative sentiment,
// e.g. "no no no no"
#sayText("You know, I'm gonna tell you some very long long story...",
    options: { interruptConditions: [{ sentiment: "negative" }] });

// ----------
// this phrase can be interrupted by the intent "wait"
#sayText("You know, I'm gonna tell you some very long long story...",
    options: { interruptConditions: [{ intent: "wait" }] });

// same as above
#sayText("You know, I'm gonna tell you some very long long story...",
    options: { interruptConditions: "wait" });

// ----------
// this phrase can be interrupted by the entity "fruit" with the particular value "apple"
#sayText("You know, I'm gonna tell you some very long long story...",
    options: { interruptConditions: [{ entity: "fruit", value: "apple" }] });

// ----------
// this phrase can be interrupted by
// EITHER the entity "fruit" (of any value) together with the intent "wait"
// OR the entity "fruit" with the particular value "apple"
#sayText("You know, I'm gonna tell you some very long long story...",
    options: {
        interruptConditions: [
            [{ entity: "fruit" }, { intent: "wait" }],
            { entity: "fruit", value: "apple" }
        ]
    });
```

NLU Control

messageHasIntent

Checks if the phrase being processed contains the specified intent.

#messageHasIntent(intent, state);

Parameters

| Name | Type | Description |
| --- | --- | --- |
| intent | string | The name of the intent being checked |
| state | string | Polarity state. Default value: positive |

State defines the intent polarity. It works only for intents that allow it. Read here how to create custom polar intents.

Possible state values:

  • positive
  • negative

Returns

boolean

true if the phrase contains the specified intent, otherwise it returns false.

Example

```dsl
digression hello {
    conditions { on #messageHasIntent("hello"); }
    do { }
}
```

```dsl
node can_i_help {
    do {
        #sayText("How can I help?");
        wait *;
    }
    transitions {
        transfer_money: goto transfer_money on #messageHasIntent("transfer_money");
    }
}
```

```dsl
node transfer_money_confirm {
    do {
        #sayText("Do you confirm money transfer?");
        wait *;
    }
    transitions {
        positive: goto process_transfer on #messageHasIntent("agreement", "positive");
        negative: goto transfer_money on #messageHasIntent("agreement", "negative");
    }
}
```

messageHasAnyIntent

Checks if the phrase being processed contains any of the specified intents.

#messageHasAnyIntent(intents);

Parameters

| Name | Type | Description |
| --- | --- | --- |
| intents | string[] | An array of names of the intents being checked |

Returns

boolean

true if the phrase contains any specified intents, otherwise it returns false.

Example

```dsl
node transfer_money_confirm {
    do {
        #sayText("Do you confirm money transfer?");
        wait *;
    }
    transitions {
        agree: goto process_transfer on #messageHasAnyIntent(["confirm", "agree"]);
        disagree: goto cancel_transfer on #messageHasAnyIntent(["cancel", "disagree"]);
    }
}
```

messageHasSentiment

Checks if the phrase being processed contains a specified sentiment.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| sentiment | string | The name of the sentiment being checked |

Possible sentiment values:

  • positive
  • negative

Connect the sentiment skill in .dashaapp to extract sentiment from messages.

#messageHasSentiment(sentiment);

Returns

boolean

true if the phrase contains the specified sentiment, otherwise it returns false.

Example

```dsl
node transfer_money_confirm {
    do {
        #sayText("Do you confirm money transfer?");
        wait *;
    }
    transitions {
        agree: goto process_transfer on #messageHasSentiment("positive");
        disagree: goto cancel_transfer on #messageHasSentiment("negative");
    }
}
```

messageHasData

Checks if the phrase being processed fulfills a given filter condition.

#messageHasData(dataType, filter);

Parameters

| Name | Type | Description |
| --- | --- | --- |
| dataType | string | The name of a skill that generates the entities being checked |
| filter | IFilter | A filter that defines the requirements for a phrase |

IFilter is a map from string to boolean or string. A key corresponds to an entity name; a boolean value denotes whether the entity must be present (true) or absent (false) in the message; a string value requires a concrete value of the entity.

Example

```dsl
node transfer_money {
    do {
        #sayText("Which account would you like to transfer from?");
        wait *;
    }
    transitions {
        provide_data: goto validate_account on #messageHasData("account");
    }
    onexit {
        provide_data: do {
            set $account = #messageGetData("account", { value: true })[0]?.value ?? "";
        }
    }
}
```

Returns

boolean

true if the phrase fulfills the condition, false otherwise.

messageGetData

Extracts entities fulfilling a given condition.

#messageGetData(dataType, filter);

Parameters

| Name | Type | Description |
| --- | --- | --- |
| dataType | string | The name of a skill that generates the entities being checked |
| filter | IFilter | A filter that defines the requirements for a phrase |

IFilter is a map from string to boolean or string. A key corresponds to an entity name; a boolean value denotes whether the entity must be present (true) or absent (false) in the message; a string value requires a concrete value of the entity.

Returns

object[]

An array of entities extracted from the phrase being processed. Each element of the returned array is an object whose keys are strings corresponding to entity names and whose values are strings corresponding to entity values.

Example

node transfer_money { do { #sayText("From which accounts you would like to transfer from?") wait *; } transitions { provide_data: goto validate_account on #messageHasData("account"); } onexit { provide_data: do { set $account = #messageGetData("account", { value: true })[0]?.value??""; } } }
node say_food {
    do {
        for (var item in #messageGetData("food")) {
            #sayText(item?.value ?? "");
        }
    }
}
node validate_account {
    do {
        var account = #messageGetData("account", { value: true, tag: true })[0];
        if (account.tag == "source") {
            set $source_account = account?.value ?? "";
        }
    }
}
external function resolve_account(info: string): string;

node myNode {
    do {
        set $account = #messageGetData("account", { value: true })[0]?.value ?? "";
        set $account = external resolve_account($account);
    }
    transitions {}
}

getSentenceType

Gets the sentence type of the current message text.

Read more about the sentence types here.

#getSentenceType();

Returns

string?

Possible values:

  • statement - Declarative sentences: Used to make statements or relay information
  • request - Imperative sentences: Used to make a command or give a direct instruction
  • question - Interrogative sentences: Used to ask a question
  • null - the sentence type was not classified (create custom intents and/or entities so that it can be classified)

Example

if (#getSentenceType() == "request") {
    #sayText("Sorry, I can't do it now", repeatMode: "ignore");
} else if (#getSentenceType() == "question") {
    #sayText("Sorry, I don't understand the question", repeatMode: "ignore");
} else {
    #sayText("I don't understand. Repeat please.", repeatMode: "ignore");
}

getMessageText

Returns the last processed message text (what the user said). Available only in digressions and in the onexit section of a transition whose condition has the ontext tag.

Note:

  • the ontext tag is the default tag in conditions and can be omitted

Returns

A string with the text said by the user.

Example

digression hello {
    conditions { on #messageHasIntent("hello"); }
    do {
        var text = #getMessageText();
        #log(text);
        return;
    }
}

node myNode {
    do {
        wait *;
    }
    transitions {
        next: goto next on true;
    }
    onexit {
        next: do {
            var text = #getMessageText();
            #log(text);
        }
    }
}

node next {
    do {
        exit;
    }
}

Dialogue control

getCurrentTime

#getCurrentTime();

Returns

number

Time in milliseconds since the script launch

getIdleTime

#getIdleTime();

Returns

number

Time in milliseconds since the last phrase/word said by the system or the interlocutor, or since the end of the wait.
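
The two timing functions above can be combined for simple diagnostics. A minimal sketch (the node name and log wording are illustrative, not part of the API):

```dsl
node report_timing {
    do {
        // Log how long the script has been running
        #log("elapsed ms: " + #stringify(#getCurrentTime()));
        // Log how long the line has been silent
        #log("idle ms: " + #stringify(#getIdleTime()));
        wait *;
    }
    transitions {}
}
```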

startTimer

Starts a timer in the context of the current block.

#startTimer(duration);

Parameters

  • duration (number) - timer duration in milliseconds

Returns

string

The handler of the started timer.

isTimerExpired

Checks if the timer has expired.

#isTimerExpired(instanceId);

Parameters

  • instanceId (string) - handler of the timer being checked

Returns

boolean

true if the timer has expired, otherwise false.
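
startTimer and isTimerExpired are typically used together: the handler returned by the first call is later passed to the second. A sketch, assuming $timer is declared in the script's context and the node names are placeholders:

```dsl
node ask_question {
    do {
        #sayText("Are you ready to proceed?");
        // Start a 10-second timer and remember its handler
        set $timer = #startTimer(10000);
        wait *;
    }
    transitions {
        agreed: goto next_step on #messageHasIntent("yes");
        timed_out: goto goodbye on #isTimerExpired($timer);
    }
}
```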

getLastVisitTime

#getLastVisitTime(name);

Parameters

  • name (string) - name of the target node

Returns

number

The time of the last visit to the specified node in the current context, in milliseconds since the script started. Returns 0 if the node has not been visited.

getVisitCount

#getVisitCount(name);

Parameters

  • name (string) - name of the target node

Returns

number

The number of visits to the specified node in the current context.
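
A common use of #getVisitCount is to vary the phrasing when the dialogue returns to the same node. A sketch (the node names and the "address" data skill are assumptions):

```dsl
node ask_address {
    do {
        if (#getVisitCount("ask_address") > 1) {
            // We have been here before: rephrase the question
            #sayText("Sorry, could you repeat your address?");
        } else {
            #sayText("What is your address, please?");
        }
        wait *;
    }
    transitions {
        got_address: goto confirm_address on #messageHasData("address");
    }
}
```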

waitingMode

Pauses IdleTime calculation (returned by the getIdleTime call) until the specified timeout expires, or until the interlocutor's voice is detected.

#waitingMode(duration);

Parameters

  • duration (number) - duration to wait in milliseconds

Returns

boolean

Always true.
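
For example, #waitingMode can suppress idle-time accumulation while the bot is deliberately silent. A sketch with illustrative names and timings:

```dsl
node hold_on {
    do {
        #sayText("Let me check that for you, one moment please.");
        // Do not count the next 10 seconds towards idle time
        #waitingMode(10000);
        wait *;
    }
    transitions {}
}
```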

waitForSpeech (blocking call)

Note: For more details, see Complex logic at the start of conversation.

Blocks execution until the voice of the interlocutor is detected or the specified timeout expires. If the timeout expires, it ensures a text event on the current segment (or the closest next one, if there is no current segment): if text is missing in the original segment, empty text will be added to it. The simulated recognition result does not contain intents.

#waitForSpeech(duration);

Parameters

  • duration (number) - duration to wait in milliseconds

Returns

boolean

true if voice was detected, otherwise false.
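
A typical use at the start of an outbound call is to give the interlocutor a moment to speak first. A sketch (greeting texts and the $phone variable are placeholders):

```dsl
start node root {
    do {
        #connectSafe($phone);
        // Give the interlocutor one second to speak first
        if (#waitForSpeech(1000)) {
            #sayText("Hello! I'm listening.");
        } else {
            #sayText("Hi, this is Dasha calling.");
        }
        wait *;
    }
    transitions {}
}
```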

ensureRSM

Note: this function is intended to enable complex logic at the start of a conversation and should be used in pair with waitForSpeech. For more details, see Complex logic at the start of conversation.

Ensures a text event on the current segment (or the closest next one, if there is no current segment): if text is missing in the original segment, empty text will be added to it.

#ensureRSM();

Returns

boolean

Always true.

setVadPauseLength

VAD stands for voice activity detection. Just as it sounds, this is the time that Dasha monitors the channel for additional user speech before responding to the user.

The function #setVadPauseLength can be helpful when the user is asked a question that may require a long answer with long pauses in speech. For example, when asked "Could you tell me your address, please?" the user may need some time to phrase their reply. In such cases the VAD pauses should be longer than in a regular conversation, such as a simple yes/no exchange.

It sets the multiplier of the pause detection parameter (see vadPauseDelay parameter) which is used to detect the end of user speech. The default value is 1.0.

You may want to use this if you feel Dasha is too quick to respond to the user.

Parameters

  • multiplier (number) - pause detection delay multiplier

Returns

boolean

Always true.
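
A sketch of the address scenario described above (node names, the "address" data skill, and the multiplier values are illustrative):

```dsl
node ask_address {
    do {
        // Allow longer pauses while the user dictates the address
        #setVadPauseLength(1.5);
        #sayText("Could you tell me your address, please?");
        wait *;
    }
    transitions {
        got_address: goto confirm_address on #messageHasData("address");
    }
    onexit {
        got_address: do {
            // Restore the default multiplier afterwards
            #setVadPauseLength(1.0);
        }
    }
}
```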

getAsyncBlockDescription

Gets the description of the specified async block.

If id is not provided (or is null), the function returns the description of the current block.

Parameters

  • id? (string) - id of the async block

Returns

{ isSessionOpened: boolean; exist: boolean; id: string; }

  • isSessionOpened - true if this async block's session is ongoing
  • exist - true if this async block exists
  • id - id of this async block
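
For instance, the description can be used to check whether a previously started async block is still alive. A sketch, assuming $blockId holds the id of such a block:

```dsl
node check_block {
    do {
        var info = #getAsyncBlockDescription($blockId);
        if (info.exist) {
            if (info.isSessionOpened) {
                #log("block " + info.id + " is still running");
            }
        }
    }
}
```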

isBlockMessage

Checks whether the current message is an async block message of the specified type.

If the type argument is not provided, checks whether the current message is an async block message of any type.

Parameters

  • type? ("Content" | "Terminate" | "Bridge" | "Unbridge" | "Terminated") - the async block message type to check for

Returns

boolean

true if the current message is an async block message of the specified type, false otherwise.

getAsyncBlockMessage

Gets content of async block message.

Parameters

None

Returns

AsyncBlockMessage (see Async Block Messages)

sendMessageToAsyncBlock

Sends a message from the current async block to the async block identified by targetRouteId.

Parameters

  • targetRouteId (string) - id of the target async block
  • type ("Content" | "Terminate") - type of the async block message
  • content? (unknown) - content of the async block message

Returns

boolean
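
A sketch of sending a content message followed by a termination request (the positional argument order and the $targetId variable are assumptions, not part of the documented signature):

```dsl
node notify_block {
    do {
        // Pass a payload to the other block
        #sendMessageToAsyncBlock($targetId, "Content", "customer confirmed");
        // Ask the other block to finish
        #sendMessageToAsyncBlock($targetId, "Terminate");
    }
}
```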

bridgeChannel

Bridges the current async block channel with the target async block channels.

If biDirectional is true, the current user will hear the target users (and vice versa), but the target users will not hear each other (unless there are bridges between them already). Otherwise, only the current user will hear the bridged users.

Parameters

  • targetRouteIds (string[]) - ids of the target async blocks
  • biDirectional (boolean) - if true, the established connections will be bidirectional

Returns

boolean

unbridgeChannel

Unbridges the current async block channel from the target async block channels.

Parameters

  • targetRouteIds (string[]) - ids of the target async blocks
  • biDirectional (boolean) - if true, the established connections will be bidirectional

Returns

boolean
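
The two functions are usually paired: bridge when the conversations should be joined, unbridge when the joint part is over. A sketch (node names and the $agentRouteId variable are placeholders):

```dsl
node transfer_to_agent {
    do {
        // Connect the user and the agent in both directions
        #bridgeChannel([$agentRouteId], true);
        wait *;
    }
    transitions {
        done: goto wrap_up on #messageHasIntent("bye");
    }
    onexit {
        done: do {
            // Disconnect the channels again
            #unbridgeChannel([$agentRouteId], true);
        }
    }
}
```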

Utilities

httpRequest (blocking call)

var response = #httpRequest(url: "https://example.com", method: "POST", body: "Sample data");
var response = #httpRequest(url: "https://example.com", method: "GET");

Performs an HTTP request, blocking the script execution until a response is returned, or a specified timeout is reached.

All requests are sent by the backend part of your application, not from Dasha itself. See the SDK docs to learn how to customize the requests being sent, or disable this feature altogether.

Parameters

  • url (string) - a URL to make the request to. Required
  • body (unknown) - request body. Default: null (no body)
  • method (string) - HTTP method to use. Default: "GET"
  • headers ({ [name: string]: string }) - headers to send. Default: {}
  • timeout (number) - request timeout in milliseconds; 0 means wait indefinitely. Default: 0
  • requestType ("json" | "text") - if set to "json", serializes the body into JSON. Default: "json"
  • responseType ("json" | "text") - if set to "json", deserializes the response body as JSON. Default: "json"

Return type

An object with the following fields:

  • status (number) - HTTP status code
  • statusText (string) - HTTP status text
  • headers ({ [name: string]: string }) - HTTP response headers
  • responseType ("json" | "text") - response type as passed to #httpRequest()
  • body (unknown) - parsed response body
  • rawBody (string) - unparsed response body

Example

start node root {
    do {
        // Establish a safe connection to the user's phone
        #connectSafe($phone);
        // Wait up to 1 second before saying the welcome message
        #waitForSpeech(1000);
        // Welcome message
        #sayText("Hi, how can I help you today?");
        var url = "https://ptsv2.com/t/vwlcn-1639739558/post";
        var body = "Dasha has responded";
        var response = #httpRequest(url: url, method: "POST", body: body);
        #log(response);
        wait *; // Waiting for a reply
    }
    transitions {
        // Here you give directions to which nodes the conversation will go
    }
}

random

#random();

Returns

number

A pseudo-random number between 0 and 1 (including 0 but not including 1) using a uniform distribution of the random variable.
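
One use is to vary the bot's phrasing so it sounds less repetitive. A sketch with illustrative texts:

```dsl
node greet {
    do {
        // Pick one of two greetings at random
        if (#random() < 0.5) {
            #sayText("Hi there!");
        } else {
            #sayText("Hello!");
        }
        wait *;
    }
    transitions {}
}
```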

sendDTMF

Sends a DTMF message if the current channel is SIP.

Parameters

  • code (string) - the DTMF message to send
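
A sketch of navigating an IVR menu ("press 2 for support"); the node name is illustrative:

```dsl
node dial_extension {
    do {
        // Press "2" on the remote IVR menu
        #sendDTMF("2");
        wait *;
    }
    transitions {}
}
```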

getDTMF

Returns the DTMF message (phone keypad presses) if the last received message is DTMF. Applicable only to SIP channels, and only when handling messages tagged onprotocol.

Returns

string if a DTMF message has just been received, null otherwise.

Example

digression dtmf {
    conditions { on #getDTMF() is not null tags: onprotocol; }
    do {
        var data = #getDTMF();
        #log(data);
        return;
    }
}

parseInt

Converts a string to an integer.

If the string represents a floating-point number, it will be floored to an integer.

If a string can not be converted into a number, the function will return NaN.

Parameters

  • target (string) - a string to convert into a number

Returns

number

An integer if the string could be converted into a number, NaN otherwise.

Example

node some_node {
    do {
        var num1 = #parseInt("10");      // number 10
        var num2 = #parseInt("10.5280"); // number 10
        var num3 = #parseInt("123aaa");  // NaN
        var num4 = #parseInt("aaa123");  // NaN
    }
}

parseFloat

Converts a string to a floating-point number.

If a string can not be converted into a number, the function will return NaN.

Parameters

  • target (string) - a string that contains a floating-point number

Returns

number

A floating-point number if the string could be converted into a number, NaN otherwise.

Example

node some_node {
    do {
        var num1 = #parseFloat("10");       // number 10
        var num2 = #parseFloat("10.5280");  // number 10.5280
        var num3 = #parseFloat("123.5aaa"); // NaN
        var num4 = #parseFloat("aaa123.5"); // NaN
    }
}

stringify

Converts any value to a string.

Parameters

  • target (unknown) - an object to convert

Returns

string

Stringified object

Example

node some_node {
    do {
        var pi = 3.1415;
        #sayText("the value of pi is " + #stringify(pi));
        var obj = {a: 1};
        #log("obj = " + #stringify(obj));
    }
}

log

Outputs a message to the application output. The message may be any object that can be created in DSL.

Parameters

  • target (unknown) - an object to log
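
A short sketch showing that non-string values can be logged directly (node name and values are illustrative):

```dsl
node debug_example {
    do {
        #log("entered debug_example");
        // Objects are logged as well, no manual stringification needed
        #log({ attempt: 1, ok: true });
    }
}
```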

Transcription

getTranscription

Returns a dialogue transcription as an array of objects.

Parameters

  • options (TranscriptionOptions) - transcription options

Returns

{ source: "human"|"ai"; text: string; name: string?; }[]

Where name is the humanName option of connectOptions, or null if it is not set.
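
A sketch that logs the whole conversation before hanging up (the node name is illustrative; the function is called here without options, assuming they may be omitted):

```dsl
node wrap_up {
    do {
        var transcript = #getTranscription();
        for (var phrase in transcript) {
            // e.g. "human: I'd like to transfer money"
            #log(phrase.source + ": " + phrase.text);
        }
        exit;
    }
}
```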

getFormattedTranscription

Returns a dialogue transcription as a string

Parameters

  • options (TranscriptionOptions) - transcription options

Returns

  • If humanName in #connect/#connectSafe is not set:
    ai: AI sentence
    human: Human sentence
  • If humanName in #connect/#connectSafe is set to agent:
    ai: AI sentence
    agent: Agent sentence
  • Warm transfer with humanName set for the agent connection, #disableRecognition() not called, and the channelIds option set to []:
    ai: AI sentence
    human: Human sentence
    ...
    agent: Agent sentence
    human: Human sentence
    ...

TranscriptionOptions

  • channelIds (string[]?) - optional; for getting the transcription across multiple connections. An empty array gets all connections; null means the current connection
  • historyLength (number?) - the number of phrases taken from the end of the logged history