Release Notes
Release 2025-07-09 (1.9.46)
Features
- TTS Enhancements:
  - Added Inworld TTS support - the platform now supports Inworld's text-to-speech capabilities with advanced speech modification features
  - Enhanced speech expressiveness through inline tags for emotions, actions, and vocal effects
- Voice Activity Detection (VAD) Enhancements:
  - Added `vad: "asap_v2"` support - detects the end of voice much faster than previous versions, but may produce false positives in the middle of sentences (see the configuration example below)
- Playground Enhancements:
  - Inspector: improved accuracy of the start-of-voice and end-of-voice view for better conversation analysis
- DSL Language Enhancements:
  - Added `findIndex` function for arrays - returns the index of the first element that matches the specified value, or -1 if no element matches. Supports deep equality comparison for objects to determine element matching. Full documentation available here (see the example below)
  - Added `merge` function for objects - performs a shallow merge of two objects, combining their properties at the top level only
  - Added `deepMerge` function for objects - performs a deep merge of two objects, combining their properties recursively with nested object merging. Full documentation available here
Configuration Examples
Inworld TTS Voice Configuration Example:
```json
{
  "speed": 1,
  "speaker": "Inworld/inworld-tts-1/Alex",
  "lang": "en-US",
  "options": { "temperature": 0.8, "pitch": 0 }
}
```
Inworld TTS Speech Modification Example (Prompt):
```text
Insert the following tags into the text in each sentence:
At the start of a sentence: [happy], [sad], [angry], [surprised], [fearful], [disgusted], [laughing], [whispering]
In any part of a sentence: [breathe], [clear_throat], [cough], [laugh], [sigh], [yawn]
Use them to make the conversation more natural. Avoid newlines after the tag in [].
```
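VAD Configuration Example (a minimal sketch following the `#connectSafe` option style used in the 1.9.26 examples below; the `$endpoint` variable is assumed to be defined elsewhere):
```dsl
// Use the faster end-of-voice detector; note the possible mid-sentence
// false positives mentioned above
#connectSafe($endpoint, {
    vad: "asap_v2"
});
```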
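DSL Array and Object Functions Example (a hedged sketch; that these helpers are invoked as methods, as shown here, is an assumption):
```dsl
var fruits = [{ name: "apple" }, { name: "orange" }];
// Deep equality comparison matches the object at index 1; a value
// not present in the array would yield -1
var idx = fruits.findIndex({ name: "orange" });

// merge combines top-level properties only, so "options" is overwritten...
var shallow = { a: 1, options: { x: 1 } }.merge({ options: { y: 2 } });
// ...while deepMerge recurses, producing options: { x: 1, y: 2 }
var deep = { a: 1, options: { x: 1 } }.deepMerge({ options: { y: 2 } });
```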
Release 2025-06-18 (1.9.40)
Features
- GPT Function Access Control:
  - Added `provided_scopes` option to control which functions are available to GPT by specifying allowed scopes. Functions can now be organized into logical groups using the `@scope` tag in JavaDoc comments (see the sketch after this list)
  - Added `except_functions` option to explicitly exclude specific functions from being available to GPT, with highest priority over other inclusion rules
  - Enhanced function filtering system with a clear priority order: scopes → provided_functions → except_functions
  - Full documentation available here
- LLM Interruption Handling:
  - Enhanced interruption handling system with comprehensive documentation and best practices
  - Added silence management with the `keep_silence()` function for preventing interruptions during user dictation (see the sketch after this list)
  - Implemented wait request handling with the `handle_wait_request()` function for managing user pause requests
  - Improved smart interruption logic that analyzes conversation context to determine appropriate responses
  - Added automatic hello ping disabling during wait periods to prevent unwanted interruptions
  - Full documentation available here
- OpenAI GPT Enhancements:
  - Added `seed` parameter support for deterministic sampling - when specified, repeated requests with the same seed and parameters should return more consistent results (see the sketches after this list)
  - Added `service_tier` parameter support for controlling OpenAI latency tiers - allows customers to specify processing tiers, including 'auto' (default), 'default', and 'flex' for scale tier subscribers
  - Added support for the `openrouter/` model prefix - users can now access OpenRouter models by prefixing model names with "openrouter/" (e.g., "openrouter/anthropic/claude-3-sonnet")
  - Added `openrouter_models` parameter for model routing - allows automatic fallback to alternative models if the primary model is unavailable or rate-limited
  - Added `openrouter_provider` parameter for provider routing - enables fine-grained control over provider selection, load balancing, and routing preferences (JSON object as a string)
- Playground Enhancements:
  - Added groups management functionality - users can now share limits between applications by organizing them into groups in the Playground Groups interface
- DSL Language Enhancements:
  - Added `tone` event support for `when` constructions - allows an immediate reaction when a tone (in Hz) specified in the `#connect` or `#connectSafe` call is received (see the sketch after this list)
- Node.js SDK Updates:
  - @dasha.ai/sdk v0.11.5:
    - Added support for scopes in dynamic tools - developers can now assign scopes to SDK-provided functions for better access control
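Function Access Control Example (a minimal sketch: the JavaDoc `@scope` tag follows the notes above, while the exact value shapes of `provided_scopes` and `except_functions` are assumptions):
```dsl
/**
 * Returns the current balance for the active customer (illustrative function).
 * @scope billing
 */
external function getBalance(): number;

// Expose only functions in the "billing" scope to GPT, but exclude
// getBalance explicitly; priority: scopes -> provided_functions -> except_functions
#answerWithGPT($prompt, gptOptions: {
    "provided_scopes": ["billing"],
    "except_functions": ["getBalance"]
});
```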
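Silence and Wait Handling Example (a hypothetical sketch: `keep_silence` and `handle_wait_request` are named in the notes above, but the call sites shown here are assumptions - consult the linked documentation for the real signatures):
```dsl
// Assumed usage: prevent interruptions while the user dictates a long
// value, e.g. a phone or card number
keep_silence();

// Assumed usage: acknowledge a "give me a minute" request and pause;
// hello pings are disabled automatically during the wait period
handle_wait_request();
```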
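OpenAI and OpenRouter Options Example (a minimal sketch in the `gptOptions` style used elsewhere in these notes; the array shape of `openrouter_models` and the exact keys inside the `openrouter_provider` JSON string are assumptions):
```dsl
// Best-effort deterministic sampling and a latency tier on an OpenAI model
#answerWithGPT($prompt, gptOptions: {
    "model": "openai/gpt-4.1-mini",
    "seed": 42,
    "service_tier": "auto"
});

// An OpenRouter model via the "openrouter/" prefix, with fallback models
// and provider routing preferences (JSON object passed as a string)
#answerWithGPT($prompt, gptOptions: {
    "model": "openrouter/anthropic/claude-3-sonnet",
    "openrouter_models": ["openai/gpt-4.1-mini"],
    "openrouter_provider": "{ \"order\": [\"anthropic\"] }"
});
```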
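Tone Event Example (a hypothetical sketch: the `tone_detector` connect option appears in the 1.9.26 examples below, but the digression trigger syntax for the `tone` event is a guess and should be verified against the documentation):
```dsl
// Frequencies in Hz to detect, configured on connect
#connectSafe($endpoint, {
    tone_detector: "400, 440, 480"
});

// Assumed trigger syntax: react immediately when a configured tone is heard
digression onTone {
    conditions { when tone; }
    do {
        // e.g. treat the tone as a busy signal and end the call
        exit;
    }
}
```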
Bug Fixes
- HTTP Request Improvements:
  - Fixed `httpRequest` function behavior when users hang up - HTTP requests are no longer interrupted (previously returning null) when a user disconnects during request execution
Release 2025-05-19 (1.9.37)
Features
- Inspector Enhancements:
  - Added information about who closed the call (human or AI) to the Inspector tool
- Transcript Improvements:
  - Added `allow_incomplete` option (`true` or `false`) to the `getTranscription`, `getFormattedTranscription`, `askGPT` and `answerWithGPT` functions - allows viewing text that is currently being spoken by the human (see the sketch below)
  - Added `include_system_messages` option to control visibility of system messages in transcription (default value: `true`)
- DSL Type System Improvements:
  - Fixed functions for types defined in DSL (previously, a function defined for one type could be called on values of other types)
- History Management:
  - Added `addHistoryMessage` function to manually add messages to the conversation history
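Transcription Options Example (a minimal sketch; passing the options as a single object argument is an assumption):
```dsl
// Include the phrase the human is still speaking, but hide system messages
var transcript = #getFormattedTranscription({
    allow_incomplete: true,
    include_system_messages: false
});
```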
Release 2025-05-12 (1.9.35)
Features
- JSON Schema Structured Output Support:
  - Added support for structured output using JSON Schema in GPT responses
  - Control response format precisely using schema definitions
  - Enforce strict typing and validation with the `strict` option
  - Full documentation available here
- Speech Recognition Enhancements:
  - Enhanced `getDetectedLanguage` function - now also returns the text on which the language was detected
- Node.js SDK Updates:
  - @dasha.ai/sdk v0.11.3:
    - Added ability to pass only function descriptions without handlers for JSON Schema overriding, with implementation handled on the DSL side
Configuration Examples
JSON Schema Structured Output Example:
```dsl
var schema = {
    name: "result",
    strict: true,
    description: "The result of the AMD detection",
    schema: {
        @type: "object",
        properties: {
            result: {
                @type: "string",
                description: "The result of the AMD detection",
                enum: ["Human", "IVR", "AMD", "Voicemail"]
            }
        },
        additionalProperties: false,
        required: ["result"]
    }
}.toString();

var ask = #askGPT($amdPrompt, {
    model: "openai/gpt-4.1-nano",
    function_call: "none",
    history_length: 0,
    save_response_in_history: false,
    response_format: schema
}, promptName: "ivr_capturer");

var response = (ask.responseText.parseJSON() as { result: string; })?.result ?? "Human";
```
Release 2025-04-24 (1.9.32)
Features
- QoL LLM features:
  - Added option `include_tool_calls` to the `getTranscription` and `getFormattedTranscription` built-in functions. This option is disabled by default. (See the sketch below.)
  - Added option `use_dynamic_tools` (boolean type) to `gptOptions` in the `answerWithGPT`/`askGPT` built-in functions. This option is enabled by default.
  - Added built-in function `getAvailableTools`
- Node.js SDK Updates:
  - @dasha.ai/sdk v0.11.2:
    - Added ability to provide custom JavaScript functions to the conversation context, enabling seamless integration between Dasha Script and Node.js code
    - Functions defined in the SDK can now be called directly from Dasha Script, allowing for more powerful application logic and external system integrations
    - Support for complex data types as parameters and return values for enhanced data handling
    - SDK Events support with new documentation on sending events from the Node.js SDK to DashaScript and handling them with `when SDKEvent` or `onSDKEvent` digressions
Configuration Examples
SDK Function Integration Example:
```typescript
import { Type } from "@sinclair/typebox";

app.queue.on("ready", async (id, conv, info) => {
    // Register a dynamic tool that GPT can call during the conversation
    await conv.setDynamicTool({
        // Function name that GPT will use to call this tool
        name: "getFruitPrice",
        // Clear description of what the function does - this helps GPT understand when to use it
        description: "Returns the price of a fruit based on its name and optional quantity",
        // Schema definition using TypeBox to define parameters and their types
        schema: Type.Object({
            name: Type.String({ description: "Name of the fruit" }),
            count: Type.Optional(Type.Number({ description: "Count of the fruits, defaults to 1 if not provided" }))
        })
    }, async (args, conv) => {
        // Implementation of the function - this code runs when GPT calls getFruitPrice
        console.log(`getFruitPrice called with args: ${JSON.stringify(args)}`);
        if (args.name === "apple") {
            return 3.25 * (args.count ?? 1);
        }
        if (args.name === "orange") {
            return 6 * (args.count ?? 1);
        }
        return "No such fruit";
    });

    const result = await conv.execute();
});
```
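Tool Introspection Example (a minimal DSL-side sketch of the options above; that `getAvailableTools` takes no arguments is an assumption):
```dsl
// Include tool calls in the formatted transcript (disabled by default)
var transcript = #getFormattedTranscription({ include_tool_calls: true });

// List the tools (built-in and SDK-provided) currently available to GPT
var tools = #getAvailableTools();

// Answer without SDK-provided dynamic tools for this turn
#answerWithGPT($prompt, gptOptions: { "use_dynamic_tools": false });
```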
Release 2025-04-18 (1.9.31)
Features
- Added `parseJSON` function for the `string` type
- LLM Support:
  - Added support for the models `openai/o4-mini`, `openai/gpt-4.1`, `openai/gpt-4.1-mini`, and `openai/gpt-4.1-nano`
- Interrupt Enhancements:
  - Enhanced interrupt behavior by intent for the `#answerWithGPT` function. The function can now be interrupted before it begins speaking
Release 2025-04-04 (1.9.28)
Features
- ElevenLabs Enhancements:
  - Added `seed` parameter to `options` (uint32 value in the range 0..4294967295) - if specified, our system will make a best effort to sample deterministically, so that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed.
- LLM Support:
  - Added support for reasoning models like `openai/o3-mini`
  - Added option `reasoning_effort` with values `low`, `medium`, `high` for reasoning models
- TTS Enhancements:
  - Added parameter `"normalize_first": 1` to `options` to apply streaming volume normalization
Configuration Examples
ElevenLabs Configuration with Seed Example:
```json
{
  "speaker": "ElevenLabs/eleven_flash_v2_5/cgSgspJ2msm6clMCkdW9",
  "lang": "en-US",
  "options": {
    "similarity_boost": 0.75,
    "stability": 0.5,
    "use_speaker_boost": true,
    "seed": 42,
    "normalize_first": 1
  }
}
```
Reasoning Model Configuration Example:
```dsl
#answerWithGPT($prompt, gptOptions: { "model": "openai/o3-mini", "reasoning_effort": "high" });
```
Release 2025-03-21 (1.9.26)
Features
- ElevenLabs Enhancements:
  - Added speech rate control with the `speed` parameter (supports values from 0.7 to 1.2)
  - Added support for the `force_language` parameter to explicitly override language detection and enforce a specific language code
- Improved Tone Detection:
  - Decreased false-positive rate
  - Added `vad_suppress_on_tone` option to disable Voice Activity Detection (VAD) and Automatic Speech Recognition (ASR) when tones are detected
Node.js SDK Updates
- @dasha.ai/sdk v0.11.1:
  - Added xHeaders support for incoming SIP calls (dictionary with SIP `X-*` headers from the received SIP Invite)
Bug Fixes
- Fixed VoIP configuration removal issue in the playground
- Fixed language detection `confidence` field output
Configuration Examples
ElevenLabs Configuration Example:
```json
{
  "speed": 1.1,
  "speaker": "ElevenLabs/eleven_flash_v2_5/cgSgspJ2msm6clMCkdW9",
  "lang": "en-US",
  "options": {
    "similarity_boost": 0.75,
    "stability": 0.5,
    "style": 0.3,
    "use_speaker_boost": true,
    "optimize_streaming_latency": 4,
    "force_language": true
  }
}
```
Tone Detection Configuration Example:
```dsl
#connectSafe($endpoint, {
    vad_suppress_on_tone: "true",
    tone_detector: "400, 440, 480",
    vad: "asap_v1"
});
```
Release 2025-02-13 (1.9.20)
Features
- Voice Technology:
  - Added voice cloning support on the TTS Playground
- AI Models:
  - Added `deepseek/*` model family support (see the sketch below)
  - Added automatic thinking parsing for Deepseek models (including when using Groq as the provider)
- UI Improvements:
  - Enhanced Inspector tool with prompt name visibility
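Deepseek Model Configuration Example (a minimal sketch in the style of the other `gptOptions` examples in these notes; the specific model name under the `deepseek/*` family is illustrative):
```dsl
// Thinking output from Deepseek models is parsed automatically
#answerWithGPT($prompt, gptOptions: { "model": "deepseek/deepseek-chat" });
```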
Bug Fixes
- Fixed search functionality crash on the documentation site
Release 2025-02-04 (1.9.17)
Features
- Released Node.js packages (requires Node.js 18+):
  - @dasha.ai/sdk v0.11.0 - Node.js SDK
  - @dasha.ai/cli v0.7.0 - Command Line Interface
- Added authentication enhancements:
  - Password recovery functionality
  - Single Sign-On (SSO) support with Google authentication
Fixes
- Fixed GPT context loss in chained function calls when executing the pattern `functionCall -> GPT answer/ask -> subsequent functionCall`
Release 2025-01-25 (1.9.15)
Features
- Added speech-to-text control functions
- Added API for voice cloning for `ElevenLabs` and `Cartesia`
- Added support for string-literal keys in object definitions, such as `{ "Content-Type": "application/json", Authorization: "Bearer ..." }` (see the sketch below)
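String-Literal Keys Example (a hedged sketch: the `#httpRequest` call shape with a `headers` option is an assumption based on the fix notes below; the URL is illustrative):
```dsl
// String-literal keys allow header names containing characters such as "-"
var response = #httpRequest("https://api.example.com/v1/orders", {
    headers: { "Content-Type": "application/json", Authorization: "Bearer ..." }
});
```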
Fixes
- Resolved conversation crash when invoking `httpRequest` with the `useSdk: false` option and a specified `Content-Type` header
- Fixed Javadoc parsing issue for GPT function definitions with multiline comments