Built-in functions of DashaScript dialogue design

DashaScript has a number of built-in functions that give you fine-grained control over the interactions Dasha AI has with your users. They are designed to let you create human-like conversational experiences.

Built-in functions are inline and can be called from executable sections (the do or transitions sections within a node or a digression). They are identified by the # prefix.

node myNode {
    do {
        // Calling the "sayText" function.
        #sayText("Wake up, Neo. The Matrix has you.");
        #sayText("Do you want to follow the white rabbit?");
        wait *;
    }
    transitions {
        // Calling the "messageHasIntent" function.
        transition1: goto myNode2 on #messageHasIntent("yes");
        transition2: goto myNode2 on #messageHasIntent("no");
    }
    onexit {
        transition1: do {
            // Calling the #log function to log to the console.
            #log("He follows the white rabbit.");
        }
        transition2: do {
            // Calling the #log function to log to the console.
            #log("He does not follow the white rabbit.");
        }
    }
}

Blocking calls

Blocking call (synchronous) functions block execution of further operations until the function is resolved. When it resolves, the function returns its result and code execution continues.

GPT

answerWithGPT

Answers the user using GPT, with the current conversation history and the ability to call DSL functions.

Parameters

  • prompt (string) - a string with the GPT prompt
  • gptOptions ({ model: string; openai_apikey: string; }) - model is the name of the model, for example openai/gpt-4; openai_apikey is your OpenAI API key
  • repeatMode? (IRepeatMode) - controls repeating logic (see IRepeatMode); default value: override
  • interruptible? (boolean) - whether the phrase can be interrupted; default value: false
  • sayOptions? ({ speed?: number; emotion?: string; interruptDelay?: number; interruptConditions?: (InterruptCondition|InterruptCondition[])[] }) - overrides the speed and emotion of the spoken text, the delay before interruption by the human voice, and interruptConditions (see Say Options)

Returns

Returns an object with fields:

  • interrupted - true if GPT was interrupted by the user
  • functionCalled - true if a function was called during or instead of the answer
  • completed - true if GPT has completed answering (was not interrupted and the connection to the user was not closed)

Example

node gpt {
    do {
        // We will be here when the user says something, or a retry is required
        var a = #answerWithGPT(`Your name is Mary. You are working in fruit seller contact center. You can only tell customer a price of requested fruit. If you have no price for fruit tell: We have no such fruit`,
            interruptible: true,
            gptOptions: { model: "openai/gpt-4", openai_apikey: "YOUR_APIKEY" },
            sayOptions: { interruptDelay: 1.0 }
        );
        // If GPT called a function, call answerWithGPT one more time to pass the result back to GPT
        if (a.functionCalled) {
            #log("Called a function, retry");
            goto retry;
        }
        wait *;
    }
    transitions {
        gpt: goto gpt on true;
        retry: goto gpt;
    }
}

Session control

connect (blocking call)

Attempts to establish a connection (starts a call) and returns the result of the attempt. Blocks execution until the result of the connection attempt is received.

#connect(endpoint);

Parameters

  • endpoint (string) - the endpoint to connect to

Returns

object

The returned object depends on the outcome of the connection attempt.

On success: returns an object with the following fields:

  • msgId (string literal) - contains the value "OpenedSessionChannelMessage"
  • endpoint (string) - the endpoint with which the connection is established

On failure: returns an object with the following fields:

  • msgId (string literal) - contains the value "FailedOpenSessionChannelMessage"
  • reason (string) - the reason why the connection could not be established
  • details (string) - additional information
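
Example

A minimal usage sketch based on the return values documented above (the $phone variable and node names are illustrative and assumed to be defined elsewhere in the script):

node attempt_call {
    do {
        // Try to start the call and inspect the outcome of the attempt.
        var result = #connect($phone);
        if (result.msgId == "FailedOpenSessionChannelMessage") {
            // The connection could not be established; log the reason and stop.
            #log("Connection failed: " + result.reason);
            exit;
        }
        #sayText("Hello! Can you hear me?");
        wait *;
    }
    transitions {}
}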

connectSafe (blocking call)

Establishes a connection (starts a call) and returns the result of the attempt. If the connection failed, it stops script execution with the error "Connection failed". Blocks execution until the result of the connection attempt is received.

#connectSafe(endpoint);

Parameters

  • endpoint (string) - the endpoint to connect to

Returns

object

Object with the following fields:

  • message (IConnectionResult) - the message received in response to the connection request

IConnectionResult is an object with the following fields:

  • msgId (string literal) - contains the value "OpenedSessionChannelMessage"
  • endpoint (string) - the endpoint with which the connection is established

forward

For SIP-based channels, emits a SIP Refer with the Refer-To username equal to the endpoint. Does nothing for a text-based channel (chat).

#forward(endpoint);

Parameters

  • endpoint (string) - the endpoint to forward the call to

Returns

boolean

Always true.
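
Example

A minimal sketch of forwarding a SIP call (the endpoint value and node name are illustrative):

node forward_to_operator {
    do {
        #sayText("I am transferring you to an operator, please hold on.");
        // Emits a SIP Refer with the Refer-To username "operator_line".
        #forward("operator_line");
        exit;
    }
    transitions {}
}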

disconnect (blocking call)

Closes the connection (ends the call). Blocks execution until the result of the attempt to close the connection is received.

#disconnect();

Returns

object

Object with the following fields:

  • hangupMsgId (string) - the identifier of the message indicating that the connection was closed
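
Example

A minimal sketch of ending a call (the node name is illustrative):

node bye {
    do {
        #sayText("Thank you for calling. Goodbye!");
        // Close the connection and stop the script.
        #disconnect();
        exit;
    }
    transitions {}
}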

disableRecognition

Disables speech-to-text (STT) and natural language understanding (NLU) in the current block or in the main context (the main graph is implicitly a block too).

Note that disabling recognition interrupts the calculation of the current session's duration.

#disableRecognition();

Returns

boolean

Always true.

enableRecognition

Enables speech-to-text (STT) and natural language understanding (NLU) in the current block or in the main context (the main graph is implicitly a block too).

Note that enabling recognition turns the calculation of the current session's duration back on.

#enableRecognition();

Returns

boolean

Always true.
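
Example

A minimal sketch of pairing the two functions: recognition is switched off while a long announcement is played so that user speech during it is not processed, then switched back on (the announcement text is illustrative):

// Inside a node's do section:
#disableRecognition();
#sayText("Please note that this call may be recorded for quality assurance purposes.");
#enableRecognition();
wait *;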

NLG Control

say (blocking call)

Sends a command to say a phrase from the connected phrasemap.

Blocks execution until the phrase has been said (or interrupted).

#say(phraseId, args, repeatMode, interruptible, options);

Parameters

  • phraseId (string) - the phrasemap key of the phrase to be spoken
  • args? ({ [x: string]: unknown; }) - dynamic phrase arguments that will be used to construct the phrase (see the NLG doc)
  • repeatMode? (IRepeatMode) - controls repeating logic (see IRepeatMode); default value: override
  • interruptible? (boolean) - whether the phrase can be interrupted; default value: false
  • options? ({ speed?: number; emotion?: string; interruptDelay?: number; interruptConditions?: (InterruptCondition|InterruptCondition[])[] }) - overrides the speed and emotion of the spoken text, the delay before interruption by the human voice, and interruptConditions (see Say Options)

Returns

boolean

true if the phrase was said completely, false if it was interrupted.
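
Example

A minimal sketch, assuming the phrasemap shown in the Phrase buffer section below (it defines the "how_are_you" phrase); the named-argument style follows the #sayText examples later on this page:

node greeting {
    do {
        // Pronounce the "how_are_you" phrase from the connected phrasemap
        // and allow the user to interrupt it.
        var said = #say("how_are_you", interruptible: true);
        if (said == false) {
            #log("The phrase was interrupted.");
        }
        wait *;
    }
    transitions {}
}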

sayChanneled (blocking call)

#sayChanneled(phraseId, args, channelIds, options);

Channeled version of the built-in function #say.

Like #say, it sends a command to pronounce a phrase from the phrasemap.

Unlike #say, #sayChanneled lets you pronounce the phrase in the specific channels given by the channelIds parameter. If channelIds is not provided or is an empty array, the phrase is pronounced in all channels. It does not affect the phrase buffer, so it has no repeatMode parameter.

Blocks execution until the phrase is spoken.

Parameters

  • phraseId (string) - the phrasemap key of the phrase to be spoken
  • args? ({ [x: string]: unknown; }) - dynamic phrase arguments that will be used to construct the phrase (see the NLG doc)
  • channelIds? (string[]) - the channel ids to send the pronounce command to; if not provided or an empty array, the command is sent to all available channels
  • options? ({ speed?: number; emotion?: string; useGluing?: boolean; }) - overrides the speed and emotion of the spoken text (see Say Options)

Returns

boolean

Always true, as it cannot be interrupted.

sayText (blocking call)

Sends a command to say the given text. Differs from #say in that the text parameter is interpreted literally rather than being resolved via the phrasemap.

Blocks execution until the phrase has been said (or interrupted).

Warning: it is not recommended to use this function in production.

More details: this function does not require a phrasemap, but at a cost. No static analysis is performed to check whether the phrase is available (in the case of prerecorded speech), no pre-synthesis is performed (in the case of synthesized speech), which may cause delays at runtime, and no caching is used, which causes re-synthesis on each run.

Thus, this function can be used for demonstration purposes to keep the script as simple as possible, but in general it is not recommended for production use.

#sayText(text, repeatMode, interruptible, options);

Parameters

  • text (string) - the text to say
  • repeatMode? (IRepeatMode) - controls repeating logic (see IRepeatMode); default value: override
  • interruptible? (boolean) - whether the phrase can be interrupted; default value: false
  • options? ({ speed?: number; emotion?: string; interruptDelay?: number; interruptConditions?: (InterruptCondition|InterruptCondition[])[] }) - controls the speed and emotion of the spoken text, the delay before interruption by the human voice, and interruptConditions (see Say Options)

Returns

boolean

true if the phrase was completely said, false if it was interrupted.

Examples

// just say some text
#sayText("Hello, how are you?");

// ----------

// say this text (it will not be repeated if you call #repeat)
#sayText("Hello, how are you?", "ignore");

// ----------

#sayText("Hello, how are you?");
// add to the phrase buffer
#sayText("Would you like some coffee?", "complement");
// a call of #repeat will say "Hello, how are you? Would you like some coffee?"

// ----------

// this phrase can be interrupted by the user if they say something
// during `options.interruptDelay` seconds (default value is 2 seconds)
#sayText("Hello, how are you?", interruptible: true);

// ----------

// say this phrase fast
#sayText("Hello, how are you?", options: { speed: 1.5 });

// ----------

// say this phrase with the emotion extracted from the text "I love you"
// NOTE: you need to enable emotional Dasha TTS with `conv.audio.tts = "dasha-emotional";`
#sayText("Hello, how are you?", options: { emotion: "from text: I love you" });

// ----------

// say this phrase with the emotion extracted from the text "I love you", and say it slowly
#sayText("Hello, how are you?", options: { emotion: "from text: I love you", speed: 0.7 });

// ----------

// build the phrase from a variable
#sayText("Hello " + $name + ", how are you?");

// ----------

// this phrase can be interrupted by the intent "wait"
#sayText("You know, I'm gonna tell you some very long long story...",
    options: { interruptConditions: [{ intent: "wait" }] }
);

sayTextChanneled (blocking call)

Channeled version of the built-in function #sayText.

Like #sayText, it sends a command to say the given text.

Unlike #sayText, #sayTextChanneled lets you pronounce the phrase in the specific channels given by the channelIds parameter. If channelIds is not provided or is an empty array, the phrase is pronounced in all channels. It does not affect the phrase buffer, so it has no repeatMode parameter. Also, it cannot be interrupted.

Blocks execution until the phrase is spoken.

Parameters

  • text (string) - the text to say
  • channelIds? (string[]) - the channel ids to send the pronounce command to; if not provided or an empty array, the command is sent to all available channels
  • options? ({ speed?: number; emotion?: string; }) - controls the speed and emotion of the spoken text (see Say Options)

Returns

boolean

Always true, as it cannot be interrupted.

repeat (blocking call)

Sends a command to repeat phrases from the phrase buffer (see the repeatMode argument of #say or #sayText).

Due to the variability of phrases (which are set in the phrasemap), calling #repeat() can enliven a dialogue and help it sound more human-like.

Blocks execution until the phrase has been said (or interrupted).

#repeat(accuracy, interruptible, options);

Parameters

  • accuracy? (IRepeatAccuracy) - the phrasemap sub-key to use; default value: "repeat"
  • interruptible? (boolean) - whether the phrase can be interrupted; default value: false
  • options? ({ interruptDelay?: number; }) - controls the delay before interruption (see Say Options)

IRepeatAccuracy is an enum of string literals that denotes which phrasemap sub-key will be used for pronunciation:

  • first
  • repeat
  • short

Returns

boolean

true if the phrase was completely said, false if it was interrupted.

Phrase buffer

The phrase buffer is a system buffer that accumulates phrases so that they can be pronounced again to repeat them to the user.

Use the built-in function #repeat() to repeat the phrases accumulated in the buffer.

Due to the variability of phrases (which are set in the phrasemap), calling #repeat() can enliven a dialogue and help it sound more human-like.

The content of the phrase buffer is controlled by the repeatMode argument used when invoking phrase pronunciation (see IRepeatMode).

IRepeatMode

IRepeatMode is an enum of string literals that controls the phrase buffer update logic:

  • override: overrides the phrase buffer. If a phrase is pronounced with repeatMode: "override", the buffer is cleared and the phrase is appended to the empty buffer.
  • complement: appends to the phrase buffer. The buffer is not cleared and the current phrase is appended to it.
  • ignore: does not change the phrase buffer. The buffer is not cleared and the current phrase is not appended to it.

Example

Suppose, you have the following phrasemap content:

{ "default": { "voiceInfo": { "lang": "en-US", "speaker": "default" }, "phrases": { "how_are_you": { "first": [{ "text": "How are you doing?" }], "repeat": [{ "text": "How are you today?" }] }, "i_said": [{ "text": "I was saying..." }] } } }

... and the following DSL code:

#sayText("Looks like the weather is good today!"); // default repeatMode is "override" // buffer: ["Looks like the weather is good today!"] #sayText("I am wondering...", repeatMode: "override"); // override buffer content // buffer: ["I am wondering..."] #say("how_are_you", repeatMode: "complement"); // buffer: ["I am wondering...", "how_are_you"] /** ... suppose some dialogue happens here ... */ #say("i_said", repeatMode: "ignore"); // buffer stays the same // buffer: ["I am wondering...", "how_are_you"] #repeat();

Then the last #repeat will trigger the following text to be pronounced by Dasha:

AI: "I am wondering..." AI: "How are you today?"

So, the current buffer content was pronounced and (where possible) the "repeat" versions of phrases were used.

preparePhrase

Sends a command to prepare a phrase.

If a phrase specified in the phrasemap is static (i.e. it does not have any arguments), it is automatically prepared before the dialogue begins.

If a phrase is dynamic and requires arguments, it cannot be prepared in advance, so it is prepared when the #say function is called.

Preparation takes some time, so preparing an unprepared phrase at runtime may cause lags in the dialogue.

To avoid this, you can call #preparePhrase before the actual dialogue starts to force-prepare a dynamic phrase with particular arguments.

The phraseId and options parameters are the same as for the #say function.

Parameters

  • phraseId (string) - the phrasemap key of the phrase to be prepared
  • options ({ emotion?: string; speed?: number; }) - overrides the speed and emotion of the spoken text (see Say Options)

Returns

boolean

Always true.
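
Example

A minimal sketch showing only the call pattern; it assumes the "how_are_you" phrase from the phrasemap in the Phrase buffer section and a $phone context variable:

start node root {
    do {
        // Force-prepare the phrase before the call starts so that
        // a later #say("how_are_you") does not cause a runtime lag.
        #preparePhrase("how_are_you");
        #connectSafe($phone);
        wait *;
    }
    transitions {}
}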

Say Options

NOTE: Different NLG control functions consume different options. See the description of the desired function.

speed

  • speed (number) - defines the speed of the synthesized phrase; default value: 1

emotion

  • emotion (string) - defines the emotion of the synthesized phrase; default value: "Neutral"

NOTE: you need to enable the emotional Dasha TTS with `conv.audio.tts = "dasha-emotional";`

interruptDelay

  • interruptDelay (number) - defines the time in seconds of human voice needed to interrupt the phrase being pronounced; default value: 2

useGluing

  • useGluing (boolean) - if true, composite phrasemap phrases are concatenated before synthesizing; default value: true (recommended)

interruptConditions

  • interruptConditions ((InterruptCondition|InterruptCondition[])[], see the description below) - defines triggers in the user's utterance that interrupt the phrase currently being pronounced

NOTE: if interruptDelay is set to a specific value, it is considered an additional trigger. Otherwise, the default behaviour (interrupting after a 2-second-long utterance) is ignored.

type InterruptCondition =
    // triggers when the utterance has positive or negative sentiment
    { sentiment: "positive"|"negative"; } |
    // triggers when the utterance contains a specific entity
    { entity: string; value?: string; tag?: string; } |
    // triggers when the utterance contains a specific intent
    { intent: string; sentiment?: "positive"|"negative"; } |
    // triggers when the utterance contains a specific intent (same as { intent: string })
    string;

Examples

// this phrase can be interrupted if the user's utterance has negative sentiment,
// e.g. "no no no no"
#sayText("You know, I'm gonna tell you some very long long story...",
    options: { interruptConditions: [{ sentiment: "negative" }] }
);

// ----------

// this phrase can be interrupted by the intent "wait"
#sayText("You know, I'm gonna tell you some very long long story...",
    options: { interruptConditions: [{ intent: "wait" }] }
);

// same as above
#sayText("You know, I'm gonna tell you some very long long story...",
    options: { interruptConditions: "wait" }
);

// ----------

// this phrase can be interrupted by the entity "fruit" with the particular value "apple"
#sayText("You know, I'm gonna tell you some very long long story...",
    options: { interruptConditions: [{ entity: "fruit", value: "apple" }] }
);

// ----------

// this phrase can be interrupted by
// EITHER the entity "fruit" (of any value) together with the intent "wait"
// OR the entity "fruit" with the particular value "apple"
#sayText("You know, I'm gonna tell you some very long long story...",
    options: { interruptConditions: [
        [{ entity: "fruit" }, { intent: "wait" }],
        { entity: "fruit", value: "apple" }
    ] }
);

NLU Control

messageHasIntent

Checks if the phrase being processed contains the specified intent.

#messageHasIntent(intent, state);

Parameters

  • intent (string) - the name of the intent being checked
  • state (string) - polarity state; default value: positive

State defines the intent polarity. It works only for intents that allow it. Read here how to create custom polar intents.

Possible state values:

  • positive
  • negative

Returns

boolean

true if the phrase contains the specified intent, otherwise it returns false.

Example

digression hello {
    conditions { on #messageHasIntent("hello"); }
    do { }
}

node can_i_help {
    do {
        #sayText("How can I help?");
        wait *;
    }
    transitions {
        transfer_money: goto transfer_money on #messageHasIntent("transfer_money");
    }
}

node transfer_money_confirm {
    do {
        #sayText("Do you confirm money transfer?");
        wait *;
    }
    transitions {
        positive: goto process_transfer on #messageHasIntent("agreement", "positive");
        negative: goto transfer_money on #messageHasIntent("agreement", "negative");
    }
}

messageHasAnyIntent

Checks if the phrase being processed contains any of the specified intents.

#messageHasAnyIntent(intents);

Parameters

  • intents (string[]) - an array of names of the intents being checked

Returns

boolean

true if the phrase contains any specified intents, otherwise it returns false.

Example

node transfer_money_confirm {
    do {
        #sayText("Do you confirm money transfer?");
        wait *;
    }
    transitions {
        agree: goto process_transfer on #messageHasAnyIntent(["confirm", "agree"]);
        disagree: goto cancel_transfer on #messageHasAnyIntent(["cancel", "disagree"]);
    }
}

messageHasSentiment

Checks if the phrase being processed contains a specified sentiment.

Parameters

  • sentiment (string) - the name of the sentiment being checked

Possible sentiment values:

  • positive
  • negative

Connect the sentiment skill in .dashaapp to extract sentiment from messages.

#messageHasSentiment(sentiment);

Returns

boolean

true if the phrase contains the specified sentiment, otherwise it returns false.

Example

node transfer_money_confirm {
    do {
        #sayText("Do you confirm money transfer?");
        wait *;
    }
    transitions {
        agree: goto process_transfer on #messageHasSentiment("positive");
        disagree: goto cancel_transfer on #messageHasSentiment("negative");
    }
}

messageHasData

Checks if the phrase being processed fulfills a given filter condition.

#messageHasData(dataType, filter);

Parameters

  • dataType (string) - the name of a skill that generates the entities being checked
  • filter (IFilter) - a filter that defines the requirements for a phrase

IFilter is a map from string to boolean or string. The key corresponds to the entity name; a boolean value denotes whether the entity must be present (true) or absent (false) in the message; a string value sets a requirement for a concrete value of the entity.

Example

node transfer_money {
    do {
        #sayText("Which account would you like to transfer from?");
        wait *;
    }
    transitions {
        provide_data: goto validate_account on #messageHasData("account");
    }
    onexit {
        provide_data: do {
            set $account = #messageGetData("account", { value: true })[0]?.value??"";
        }
    }
}

Returns

boolean

true if the phrase fulfills the condition, false otherwise.

messageGetData

Extracts entities fulfilling a given condition.

#messageGetData(dataType, filter);

Parameters

  • dataType (string) - the name of a skill that generates the entities being checked
  • filter (IFilter) - a filter that defines the requirements for a phrase

IFilter is a map from string to boolean or string. The key corresponds to the entity name; a boolean value denotes whether the entity must be present (true) or absent (false) in the message; a string value sets a requirement for a concrete value of the entity.

Returns

object[]

An array of entities extracted from the phrase being processed. Each element of the returned array is an object whose keys are strings corresponding to entity names and whose values are strings corresponding to entity values.

Example

node transfer_money {
    do {
        #sayText("Which account would you like to transfer from?");
        wait *;
    }
    transitions {
        provide_data: goto validate_account on #messageHasData("account");
    }
    onexit {
        provide_data: do {
            set $account = #messageGetData("account", { value: true })[0]?.value??"";
        }
    }
}

node say_food {
    do {
        for (var item in #messageGetData("food")) {
            #sayText(item?.value ?? "");
        }
    }
}

node say_account {
    do {
        var account = #messageGetData("account", { value: true, tag: true })[0];
        if (account.tag == "source") {
            set $source_account = account?.value??"";
        }
    }
}

external function resolve_account(info: string): string;

node myNode {
    do {
        set $account = #messageGetData("account", { value: true })[0]?.value??"";
        set $account = external resolve_account($account);
    }
    transitions {}
}

getSentenceType

Gets the sentence type of the current message text.

Read more about the sentence types here.

#getSentenceType();

Returns

string?

Possible values:

  • statement - Declarative sentences: Used to make statements or relay information
  • request - Imperative sentences: Used to make a command or give a direct instruction
  • question - Interrogative sentences: Used to ask a question
  • null - the type of sentence is not classified (create custom intents and/or entities, then it will be classified)

Example

if (#getSentenceType() == "request") {
    #sayText("Sorry, I can't do it now", repeatMode: "ignore");
} else if (#getSentenceType() == "question") {
    #sayText("Sorry, I don't understand the question", repeatMode: "ignore");
} else {
    #sayText("I don't understand. Repeat please.", repeatMode: "ignore");
}

getMessageText

Returns the last processed message text (what the user said). Available only in a digression and in the onexit section of a transition whose condition has the ontext tag.

Note:

  • the ontext tag is the default tag in conditions and can be omitted

Returns

A string with the text said by the user.

Example

digression hello {
    conditions { on #messageHasIntent("hello"); }
    do {
        var text = #getMessageText();
        #log(text);
        return;
    }
}

node myNode {
    do {
        wait *;
    }
    transitions {
        next: goto next on true;
    }
    onexit {
        next: do {
            var text = #getMessageText();
            #log(text);
        }
    }
}

node next {
    do {
        exit;
    }
}

Dialogue control

getCurrentTime

#getCurrentTime();

Returns

number

Time in milliseconds since the script launch

getIdleTime

#getIdleTime();

Returns

number

Time in milliseconds since the last phrase/word said by the system or the interlocutor, or since the end of the wait.
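
Example

A minimal sketch that logs both values (numbers are converted with #stringify for concatenation, as in the #stringify example below):

// Inside a node's do section:
#log("elapsed since launch, ms: " + #stringify(#getCurrentTime()));
#log("user idle time, ms: " + #stringify(#getIdleTime()));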

startTimer

Starts a timer in the context of the current block.

#startTimer(duration);

Parameters

  • duration (number) - the timer duration in milliseconds

Returns

string

The handler of the started timer.

isTimerExpired

Checks if the timer has expired.

#isTimerExpired(instanceId);

Parameters

  • instanceId (string) - the handler of the timer being checked

Returns

boolean

true if the timer has expired, otherwise false.
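
Example

A minimal sketch showing how the two functions are meant to be paired; the $timer context variable and node names are illustrative, and the navigation between the nodes is omitted:

node start_waiting {
    do {
        #sayText("Take your time, I will wait.");
        // Remember the timer handler so it can be checked later.
        set $timer = #startTimer(30000);
        wait *;
    }
    transitions {}
}

node check_waiting {
    do {
        if (#isTimerExpired($timer)) {
            #sayText("Are you still there?");
        }
        wait *;
    }
    transitions {}
}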

getLastVisitTime

#getLastVisitTime(name);

Parameters

  • name (string) - the name of the target node

Returns

number

The time (in milliseconds since the script started) of the last visit to the specified node in the current context. Returns 0 if the node has not been visited.

getVisitCount

#getVisitCount(name);

Parameters

  • name (string) - the name of the target node

Returns

number

The number of visits to the specified node in the current context.
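
Example

A minimal sketch: using the visit count of the current node to stop re-asking after several failed attempts (node names are illustrative):

node ask_again {
    do {
        if (#getVisitCount("ask_again") > 3) {
            #sayText("Sorry, I still cannot understand you. Goodbye!");
            #disconnect();
            exit;
        }
        #sayText("Could you repeat that, please?");
        wait *;
    }
    transitions {}
}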

waitingMode

Pauses the idle time calculation (returned by the #getIdleTime call) until the specified timeout expires, or until the interlocutor's voice is detected.

#waitingMode(duration);

Parameters

  • duration (number) - the duration to wait in milliseconds

Returns

boolean

Always true.
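
Example

A minimal sketch: the user has asked to hold on, so idle time tracking is paused for one minute:

// Inside a node's do section, after the user says "hold on a minute":
#sayText("Sure, take your time.");
#waitingMode(60000);
wait *;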

waitForSpeech (blocking call)

Note: For more details, see Complex logic at the start of conversation.

Blocks execution until the interlocutor's voice is detected or the specified timeout expires. If the timeout expires, it ensures a text event on the current segment (or the closest next one, if there is no current segment): if text is missing in the original segment, an empty text is added to it. The simulated recognition result does not contain intents.

#waitForSpeech(duration);

Parameters

  • duration (number) - the duration to wait in milliseconds

Returns

boolean

true if a voice was detected, otherwise false.
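
Example

A minimal sketch of the pattern described in Complex logic at the start of conversation: greet the user only if they stay silent for the first two seconds ($phone is assumed to be a context variable):

start node root {
    do {
        #connectSafe($phone);
        // Give the user up to 2 seconds to speak first.
        var spoke = #waitForSpeech(2000);
        if (spoke == false) {
            #sayText("Hello! This is Dasha calling.");
        }
        wait *;
    }
    transitions {}
}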

ensureRSM

Note: this function is intended to enable complex logic at the start of a conversation and should be used in pair with waitForSpeech. For more details, see Complex logic at the start of conversation.

Ensures a text event on the current segment (or the closest next one, if there is no current segment): if text is missing in the original segment, an empty text is added to it.

#ensureRSM();

Returns

boolean

Always true.

setVadPauseLength

VAD refers to voice activity detection. Just as it sounds - this is the time that Dasha monitors the channel for additional user speech before responding to the user.

The #setVadPauseLength function can be helpful when the user is asked a question that may require a long answer with long pauses in speech. For example, when asked "Tell me your address, please?", the user may need some time to phrase their reply. In such a case the VAD pauses should be longer than in a regular conversation, for example a simple yes/no reply.

It sets a multiplier for the pause detection parameter (see the vadPauseDelay parameter) that is used to detect the end of user speech. The default value is 1.0.

You may want to use this if you feel Dasha is too quick to respond to the user.

Parameters

  • multiplier (number) - the pause detection delay multiplier

Returns

boolean

Always true.
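
Example

A minimal sketch: lengthening the pause tolerance right before a question that invites a long, hesitant answer (the multiplier value is illustrative):

node ask_address {
    do {
        // Double the pause-detection delay so the user can pause
        // mid-sentence without being cut off.
        #setVadPauseLength(2.0);
        #sayText("Tell me your address, please?");
        wait *;
    }
    transitions {}
}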

getAsyncBlockDescription

Gets the description of the specified async block.

If id is not provided (or is null), the function returns the description of the current block.

Parameters

  • id? (string) - the id of the async block

Returns

{ isSessionOpened: boolean; exist: boolean; id: string; }

  • isSessionOpened - true if this async block's session is ongoing
  • exist - true if this async block exists
  • id - id of this async block

isBlockMessage

Checks if the current message is an async block message of the specified type.

If the type argument is not provided, checks if the current message is an async block message of any type.

Parameters

  • type? ("Content"|"Terminate"|"Bridge"|"Unbridge"|"Terminated") - the type of async block message to check for

Returns

boolean

true if the current message is an async block message of the specified type.

getAsyncBlockMessage

Gets the content of an async block message.

Parameters

None

Returns

AsyncBlockMessage (see Async Block Messages)

sendMessageToAsyncBlock

Sends a message from the current async block to the async block with targetRouteId.

Parameters

  • targetRouteId (string) - the id of the target async block
  • type ("Content"|"Terminate") - the type of the async block message
  • content? (unknown) - the content of the async block message

Returns

boolean

bridgeChannel

Bridges the current async block channel with the target async block channels.

If biDirectional == true, the current user will hear the target users (and vice versa), but the target users will not hear each other (unless there are already bridges between them). Otherwise, the current user will hear the bridged users.

Parameters

  • targetRouteIds (string[]) - the ids of the target async blocks
  • biDirectional (boolean) - if true, the established connections will be bidirectional

Returns

boolean

unbridgeChannel

Unbridges the current async block channel and the target async block channels.

Parameters

  • targetRouteIds (string[]) - the ids of the target async blocks
  • biDirectional (boolean) - if true, the established connections will be bidirectional

Returns

boolean

Utilities

httpRequest (blocking call)

var response = #httpRequest(url: "https://example.com", method: "POST", body: "Sample data");
var response = #httpRequest(url: "https://example.com", method: "GET");

Performs an HTTP request, blocking the script execution until a response is returned, or a specified timeout is reached.

All requests are sent by the backend part of your application, not from Dasha itself. See the SDK docs to learn how to customize the requests being sent, or disable this feature altogether.

Parameters

  • url (string, required) - a URL to make a request to
  • body (unknown; default: null, i.e. no body) - the request body
  • method (string; default: "GET") - the HTTP method to use
  • headers ({ [name: string]: string }; default: {}) - the headers to send
  • timeout (number; default: 0) - the request timeout in milliseconds; 0 means wait indefinitely
  • requestType ("json"|"text"; default: "json") - if set to "json", serializes body into JSON
  • responseType ("json"|"text"; default: "json") - if set to "json", deserializes the response body as JSON

Return type

An object with the following fields:

  • status (number) - the HTTP status code
  • statusText (string) - the HTTP status text
  • headers ({ [name: string]: string }) - the HTTP response headers
  • responseType ("json"|"text") - the response type as passed to #httpRequest()
  • body (unknown) - the response body
  • rawBody (string) - the unparsed response body

Example

start node root {
    do {
        // Establish a safe connection to the user's phone
        #connectSafe($phone);
        // Wait for 1 second before saying the welcome message
        #waitForSpeech(1000);
        // Welcome message
        #sayText("Hi, how can I help you today?");
        var url = "https://ptsv2.com/t/vwlcn-1639739558/post";
        var body = "Dasha has responded";
        var response = #httpRequest(url: url, method: "POST", body: body);
        #log(response);
        wait *; // Waiting for a reply
    }
    transitions {
        // Here you give directions to which nodes the conversation will go
    }
}

random

#random();

Returns

number

A pseudo-random number between 0 and 1 (including 0 but not including 1) using a uniform distribution of the random variable.
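
Example

A minimal sketch: using #random to vary a greeting between two variants:

node greet {
    do {
        if (#random() < 0.5) {
            #sayText("Hi there!");
        } else {
            #sayText("Hello!");
        }
        wait *;
    }
    transitions {}
}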

sendDTMF

Sends a DTMF message if the current channel is a SIP channel.

Parameters

  • code (string) - the DTMF message to send
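
Example

A minimal sketch: pressing the digit "1" in a remote IVR menu over a SIP channel (the surrounding node is illustrative):

node ivr_menu {
    do {
        // Send the DTMF code for the "1" button.
        #sendDTMF("1");
        wait *;
    }
    transitions {}
}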

getDTMF

Returns a DTMF message (button presses on the phone) if the last received message is DTMF. Relevant only for SIP channels and only when you are handling messages with the onprotocol tag.

Returns

A string if a DTMF message has just been received, null otherwise.

Example

digression dtmf {
    conditions { on #getDTMF() is not null tags: onprotocol; }
    do {
        var data = #getDTMF();
        #log(data);
        return;
    }
}

parseInt

Converts a string to an integer.

If the string represents a floating-point number, it will be floored to an integer.

If the string cannot be converted into a number, the function returns NaN.

Parameters

  • target (string) - the string to convert into a number

Returns

number

An integer if the string could be converted into a number, NaN otherwise.

Example

node some_node {
    do {
        var num1 = #parseInt("10"); // number 10
        var num2 = #parseInt("10.5280"); // number 10
        var num3 = #parseInt("123aaa"); // NaN
        var num4 = #parseInt("aaa123"); // NaN
    }
}

parseFloat

Converts a string to a floating-point number.

If the string cannot be converted into a number, the function returns NaN.

Parameters

  • target (string) - a string that contains a floating-point number

Returns

number

A floating-point number if the string could be converted into a number, NaN otherwise.

Example

node some_node {
    do {
        var num1 = #parseFloat("10"); // number 10
        var num2 = #parseFloat("10.5280"); // number 10.5280
        var num3 = #parseFloat("123.5aaa"); // NaN
        var num4 = #parseFloat("aaa123.5"); // NaN
    }
}

stringify

Converts any value to a string.

Parameters

  • target (unknown) - the object to convert

Returns

string

Stringified object

Example

node some_node {
    do {
        var pi = 3.1415;
        #sayText("the value of pi is " + #stringify(pi));
        var obj = {a: 1};
        #log("obj = " + #stringify(obj));
    }
}

log

Outputs a message to the application output. The message can be any object that can be created in DSL.

Parameters

  • target (unknown) - the object to log
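
Example

A minimal sketch; any DSL value can be passed:

node some_node {
    do {
        #log("transfer requested");
        #log({ amount: 100, currency: "USD" });
    }
}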