Application configuration
Application configuration consists of two parts:
- Global application configuration, described in the `.dashaapp` file
- Session configuration, which describes the part of the configuration related to a dialogue instance
Global configuration
The global configuration is stored in the `.dashaapp` file and contains:
Application name
The mandatory lowercase name of the application, used to identify it.
Dialogue configuration
A mandatory section that describes where to find your conversational model:
- the path to the `main` DashaScript file with your conversational model
- the path to the `view` file that describes how to render your model in the UI
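For example, assuming the conversational model lives in `main.dsl` and the UI description in `view.json` (the file names used in the examples later on this page), the dialogue section might look like this:

```json
"dialogue": {
  "file": "main.dsl",
  "view": "view.json"
}
```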
NLG Section
An optional section that describes where to find the configuration of the NLG module of your dialogue.
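The example `.dashaapp` files below use a phrase map for NLG; with those file names, the section might look like this:

```json
"nlg": {
  "type": "phrases",
  "file": "phrasemap.json"
}
```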
NLU Section
An optional section that determines how the application understands natural language.
Properties
Property | Type | Description |
---|---|---|
skills | string[] | List of skills used by the application to extract meaning from natural language. |
language | string | BCP 47 language tag denoting the target language (e.g. "en-US"). |
customIntents | CustomIntentsIncludeConfig? | Definition of custom intents. |
disabledIntentBySkill | DisabledIntentBySkill? | Intents that should not be extracted by specific skills when processing a message. |
disabledDataFactBySkill | DisabledDataFactBySkill? | Data facts that should not be extracted by specific skills when processing a message. |
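Putting these properties together, an `nlu` section might look like the following sketch (skill and file names are taken from the full example below):

```json
"nlu": {
  "skills": ["common_phrases", "sentiment"],
  "language": "en-US",
  "customIntents": { "file": "data.json" }
}
```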
Disabled intent by skill
Describes which intents should not be extracted by certain skills.
`DisabledIntentBySkill` is a map from a skill name (string) to an array of intent names (strings) that should not be extracted.
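For example, to stop the `common_phrases` skill from extracting the `how_do_you_do` intent (the combination used in the example `.dashaapp` file below), the map could look like this:

```json
"disabledIntentBySkill": {
  "common_phrases": ["how_do_you_do"]
}
```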
Disabled data fact by skill
Describes which data facts should not be extracted by certain skills.
`DisabledDataFactBySkill` is a map from a skill name (string) to an array of data fact names (strings) that should not be extracted.
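The shape mirrors `disabledIntentBySkill`. In the sketch below, the skill name `common-parser` is taken from the example `.dashaapp` file further down, while the data fact name `numbers` is purely illustrative and not defined on this page:

```json
"disabledDataFactBySkill": {
  "common-parser": ["numbers"]
}
```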
Custom model include configuration
Configuration of the custom model (`CustomIntentsIncludeConfig`): the information required to customize the extraction of meaning from natural language text.
Properties
Property | Type | Description |
---|---|---|
file | string | Path to the file containing custom intents and/or entities. |
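In the examples on this page, the custom dataset is stored in `data.json` next to the `.dashaapp` file, so the section is simply:

```json
"customIntents": { "file": "data.json" }
```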
Example of a `.dashaapp` file
{ "formatVersion": "2", "nlu": { "skills": [ "common_phrases", "common-parser", "sentiment" ], "customIntents": { "file": "data.json" }, "disabledIntentBySkill": { "common_phrases": [ "how_do_you_do" ] }, "language": "en-US" }, "nlg": { "type": "phrases", "file": "phrasemap.json" }, "dialogue": { "file": "main.dsl", "view": "view.json" }, "name": "appname", "description": "description" }
Connecting system intents
- Select the skills that contain the intents you picked from the System intents document
- Add these skills to the `nlu.skills` list in the `.dashaapp` file (see the example after this list)
- Start your application and enjoy NLU!
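For example, to use the `common_phrases` and `sentiment` skills (as in the example project below), the list would be:

```json
"skills": ["common_phrases", "sentiment"]
```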
Connecting custom intents
- Create a `data.json` file in the same folder as your `.dashaapp` file
- Add custom intents and/or custom entities to `data.json` (a rough sketch of the dataset is shown after this list)
- Add the path to the dataset file `data.json` to the `nlu.customIntents` section of the `.dashaapp` file: `"customIntents": { "file": "data.json" }`
- Start your application and enjoy NLU!
- Improve your model to achieve better results (see Improving models)
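The dataset format itself is not described on this page. The sketch below only illustrates the general idea; the `version`, `intents`, and `includes` field names and the `agrees` intent are assumptions rather than a definitive schema, so consult the custom intents documentation for the exact format:

```json
{
  "version": "v2",
  "intents": {
    "agrees": {
      "includes": ["yes", "sure", "of course"]
    }
  }
}
```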
Example of a `.dashaapp` file (`project.dashaapp`):

    {
      "formatVersion": "2",
      "name": "cool-project",
      "description": "",
      "nlu": {
        "skills": ["common_phrases", "sentiment"],
        "language": "en-US",
        "customIntents": { "file": "data.json" }
      },
      "nlg": {
        "type": "phrases",
        "file": "phrasemap.json",
        "signatureFile": "phrasemapSignature.json"
      },
      "dialogue": {
        "file": "main.dsl",
        "view": "view.json"
      }
    }
Per-conversation configuration
Some options can be set on a per-conversation basis, as properties of the conversation object. Here is an example:

    const conv = app.createConversation();
    conv.audio.tts = "default";
    conv.audio.stt = "default";
    conv.audio.noiseVolume = 0.5;
    conv.audio.vadPauseDelay = 0.8;
    conv.sip.config = "default";
To choose whether a conversation should use text or audio communication, use the `channel` option of `conv.execute()`.

    // to start a text conversation
    await conv.execute({ channel: "text" });

    // to start an audio conversation
    await conv.execute({ channel: "audio" });
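For context, here is a minimal sketch of how these per-conversation options might be used end to end in a Node.js application. The package name `@dasha.ai/sdk` and the deployment calls (`dasha.deploy`, `app.start`, `app.stop`, `app.dispose`) are assumptions based on typical Dasha SDK usage and may differ in your SDK version:

```js
const dasha = require("@dasha.ai/sdk"); // assumed package name

async function main() {
  // Deploy and start the application described by the .dashaapp file in "./app"
  // (dasha.deploy and app.start are assumed SDK calls).
  const app = await dasha.deploy("./app");
  await app.start();

  // Per-conversation configuration, as shown above.
  const conv = app.createConversation();
  conv.audio.tts = "default";
  conv.audio.noiseVolume = 0.5;

  // Run the conversation over the text channel.
  const result = await conv.execute({ channel: "text" });
  console.log(result.output); // result.output is assumed to hold the dialogue's output data

  await app.stop();
  app.dispose();
}

main();
```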