Release Notes
Release 2025-12-18 (1.9.90)
Features
- Speech-to-Text Keyword Boosting:
- Added `keywords` parameter to the `#setSttOptions` built-in function - allows configuring an array of keywords to boost or suppress recognition probability
- Added `stt_keyword_boost` option to the `#connect` and `#connectSafe` built-in functions - allows setting keyword boosting at connection time
- Keywords can be specified as plain words or with weights in the format `"word:weight"`, where weight ranges from -1 (suppress) to 1 (boost)
- Note: Some STT providers may not support negative weight values for probability suppression
- Organization Management System:
- Implemented new organization invitation system available at https://auth.dasha.ai/manage/access
- Users can now belong to multiple organizations
- Admins can invite users and modify permissions through the access management interface
- Developers have view-only access to organization settings
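For illustration, the `"word:weight"` format described above can be parsed and validated as follows. This is a sketch of the format's semantics, not platform code; treating a bare word as weight 1 is an assumption.

```javascript
// Illustrative parser for STT keyword entries of the form "word" or "word:weight",
// where weight ranges from -1 (suppress) to 1 (boost).
// Treating a bare word as weight 1 is an assumption, not documented behavior.
function parseKeyword(entry) {
  const sep = entry.lastIndexOf(":");
  if (sep === -1) {
    return { word: entry, weight: 1 };
  }
  const word = entry.slice(0, sep);
  const weight = Number(entry.slice(sep + 1));
  if (Number.isNaN(weight) || weight < -1 || weight > 1) {
    throw new RangeError(`weight must be in [-1, 1]: ${entry}`);
  }
  return { word, weight };
}
```

For example, `parseKeyword("banana:0.8")` yields `{ word: "banana", weight: 0.8 }`, while an out-of-range weight such as `"bad:2"` is rejected.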
Configuration Examples
Keyword Boosting with setSttOptions Example:

    #setSttOptions(options: { keywords: ["apple", "banana:0.8", "orange:-0.5"] });

Keyword Boosting at Connection Time Example:

    #connectSafe($endpoint, { stt_keyword_boost: ["important_term:0.9", "noise_word:-0.7"] });
Important Changes
Organization Management System Update
We have implemented a new organization invitation system, available at https://auth.dasha.ai/manage/access. A key enhancement is that users can now belong to multiple organizations, providing greater flexibility in access management.
Known Issue: Token Invalidation After Organization Change
The Problem:
A token invalidation issue occurred for tokens issued during a specific sequence:
- A user signs up via the authentication system
- A token is issued, storing the user's information and current organization
- The user's organization is later modified using the old method (granting access to a new organization and revoking it from the old one)
- The existing token becomes invalid because it still references the revoked organization
Root Cause:
Previously, we determined the organization by checking the user's current information directly. After our update, the system correctly uses the organization data stored inside the token itself. Tokens issued before an organization change become invalid.
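For context on why such tokens go stale: a JWT-style token carries its claims in a base64url-encoded payload that is fixed at issue time. The sketch below decodes such a payload for inspection; the `org` claim name is hypothetical, since the actual claim layout is internal to the platform.

```javascript
// Inspect the organization claim baked into a JWT-style token.
// Decoding only - no signature verification. The "org" claim name is
// hypothetical; the real claim layout is internal to the platform.
function getTokenOrg(token) {
  const payload = token.split(".")[1];
  const json = Buffer.from(payload, "base64url").toString("utf8");
  return JSON.parse(json).org;
}
```

A token minted before an organization change keeps the old value in this claim, so it no longer matches the user's current organization.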
Resolution: How to Fix the Issue
If you are affected by this problem (i.e., your token was invalidated due to an organization change), you need to obtain a new API key.
Via the Playground:
- Sign out from both the Playground and the Auth dashboard
- Sign back in to both the Playground and the Auth dashboard
- Generate a new API key at https://playground.dasha.ai/apikey
Via the Dasha CLI:
- Run `npx @dasha.ai/cli account login` to authenticate
- Run `npx @dasha.ai/cli dasha account info` to display your account information, including the active API key
Current System Overview
User & Access Management:
The new access management interface at https://auth.dasha.ai/manage/access provides:
- Admins can invite users and modify permissions
- Developers have view-only access
API Key Model:
API keys are linked to individual user accounts and scoped to a specific organization. We do not yet offer organization-level service accounts.
The API key from https://playground.dasha.ai/apikey is a user token with a very long lifespan, similar to a "Personal Token." It is revoked if the user's access to its associated organization is revoked.
Release 2025-09-03 (1.9.60)
Features
- Voice Interruption Enhancements:
- Added `pauseOnVoiceDelay` parameter to `connectOptions` - allows configuring the delay in seconds before the agent stops speaking (start of fade out) when a human interrupts it
- Added `pauseOnVoiceFadeTime` parameter to `connectOptions` - controls the number of seconds to fade in and fade out on interrupt
- Both parameters work in conjunction with `pauseOnVoice` for more granular control over interruption behavior
- TTS Enhancements:
- Added support for custom `apikey` and `endpoint` options in TTS provider configurations. Learn more
- Authentication Enhancements:
- Added personal tokens management page for secure API access at https://auth.dasha.ai/PersonalTokens
Configuration Examples
Voice Interruption Control Example:

    #connectSafe($endpoint, {
        pauseOnVoice: "true",
        pauseOnVoiceDelay: "0.5",   // Wait 0.5 seconds before starting fade out
        pauseOnVoiceFadeTime: "0.2" // Take 0.2 seconds to fade in/out
    });

Dynamic Language Change Example:

    // Switch to Spanish with ElevenLabs voice
    #changeLanguage("es-ES", "elevenlabs/model_id/voice_id");

    // Switch to French with custom API key and endpoint
    #changeLanguage("fr-FR", "elevenlabs/model_id/voice_id", options: {
        apikey: "your-custom-api-key",
        endpoint: "https://your-custom-tts-endpoint.com"
    });
Release 2025-07-09 (1.9.46)
Features
- TTS Enhancements:
- Added Inworld TTS support - platform now supports Inworld's text-to-speech capabilities with advanced speech modification features
- Enhanced speech expressiveness through inline tags for emotions, actions, and vocal effects
- Voice Activity Detection (VAD) Enhancements:
- Added `vad: "asap_v2"` support - detects end of voice much faster than previous versions, but may have false positives in the middle of sentences
- Playground Enhancements:
- Inspector: Improved accuracy of start of voice and end of voice view for better conversation analysis
- DSL Language Enhancements:
- Added `findIndex` function for arrays - returns the index of the first element that matches the specified value, or -1 if no element matches
- Supports deep equality comparison for objects to determine element matching
- Full documentation available here
- Added `merge` function for objects - performs a shallow merge of two objects, combining their properties at the top level only
- Added `deepMerge` function for objects - performs a deep merge of two objects, combining their properties recursively with nested object merging
- Full documentation available here
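The semantics of the new `findIndex`, `merge`, and `deepMerge` helpers can be approximated in plain JavaScript. This is a behavioral sketch, not the DSL implementation; the JSON-based deep equality is a crude illustration that assumes matching key order.

```javascript
// Behavioral sketch of the DSL array/object helpers described above.

// findIndex: index of the first element deep-equal to `value`, or -1.
function findIndexDeep(arr, value) {
  const same = (a, b) => JSON.stringify(a) === JSON.stringify(b); // crude deep equality
  return arr.findIndex((el) => same(el, value));
}

// merge: shallow merge - top-level properties of `b` overwrite `a`.
function merge(a, b) {
  return { ...a, ...b };
}

// deepMerge: nested plain objects are merged recursively.
function deepMerge(a, b) {
  const isObj = (x) => x !== null && typeof x === "object" && !Array.isArray(x);
  const out = { ...a };
  for (const [k, v] of Object.entries(b)) {
    out[k] = isObj(v) && isObj(out[k]) ? deepMerge(out[k], v) : v;
  }
  return out;
}
```

With `a = { opts: { x: 1 } }` and `b = { opts: { y: 2 } }`, `merge(a, b).opts` is `{ y: 2 }` while `deepMerge(a, b).opts` is `{ x: 1, y: 2 }`, matching the shallow-vs-deep distinction above.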
Configuration Examples
Inworld TTS Voice Configuration Example:

    {
        "speed": 1,
        "speaker": "Inworld/inworld-tts-1/Alex",
        "lang": "en-US",
        "options": {
            "temperature": 0.8,
            "pitch": 0
        }
    }

Inworld TTS Speech Modification Example (Prompt):

    Insert the following tags into the text in each sentence:
    At the start of a sentence: [happy], [sad], [angry], [surprised], [fearful], [disgusted], [laughing], [whispering]
    In any part of a sentence: [breathe], [clear_throat], [cough], [laugh], [sigh], [yawn]
    To make the conversation more natural. Avoid newlines after the tag in [].
Release 2025-06-18 (1.9.40)
Features
- GPT Function Access Control:
- Added `provided_scopes` option to control which functions are available to GPT by specifying allowed scopes. Functions can now be organized into logical groups using the `@scope` tag in JavaDoc comments
- Added `except_functions` option to explicitly exclude specific functions from being available to GPT, with highest priority over other inclusion rules
- Enhanced function filtering system with a clear priority: scopes → provided_functions → except_functions
- Full documentation available here
- LLM Interruption Handling:
- Enhanced interruption handling system with comprehensive documentation and best practices
- Added sophisticated silence management with the `keep_silence()` function for preventing interruptions during user dictation
- Implemented smart wait request handling with the `handle_wait_request()` function for managing user pause requests
- Improved smart interruption logic that analyzes conversation context to determine appropriate responses
- Added automatic hello ping disabling during wait periods to prevent unwanted interruptions
- Full documentation available here
- OpenAI GPT Enhancements:
- Added `seed` parameter support for deterministic sampling - when specified, repeated requests with the same seed and parameters should return more consistent results
- Added `service_tier` parameter support for controlling OpenAI latency tiers - allows customers to specify processing tiers, including 'auto' (default), 'default', and 'flex' for scale tier subscribers
- Added support for the `openrouter/` model prefix - users can now access OpenRouter models by prefixing model names with "openrouter/" (e.g., "openrouter/anthropic/claude-3-sonnet")
- Added `openrouter_models` parameter for model routing - allows automatic fallback to alternative models if the primary model is unavailable or rate-limited
- Added `openrouter_provider` parameter for provider routing - enables fine-grained control over provider selection, load balancing, and routing preferences (JSON object as a string)
- Playground Enhancements:
- Added groups management functionality - users can now share limits between applications by organizing them into groups in the Playground Groups interface
- DSL Language Enhancements:
- Added `tone` event support for `when` constructions - allows immediate reaction when a tone (in Hz) is received, as specified in the `#connect` or `#connectSafe` call
- Node.js SDK Updates:
- @dasha.ai/sdk v0.11.5:
- Added support for scopes in dynamic tools - developers can now assign scopes to SDK-provided functions for better access control
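The function filtering priority described above (scopes → provided_functions → except_functions) can be sketched as follows. The descriptor shape `{ name, scope }` and the camelCase option names are assumptions for illustration, not the platform API.

```javascript
// Illustrative sketch of the GPT function filtering priority:
// scopes -> provided_functions -> except_functions.
// The descriptor shape { name, scope } and the camelCase option names
// are assumptions for illustration, not the platform API.
function selectGptFunctions(allFunctions, options = {}) {
  const { providedScopes, providedFunctions, exceptFunctions = [] } = options;
  let available = allFunctions;
  // 1. If scopes are given, keep only functions tagged with an allowed @scope.
  if (providedScopes) {
    available = available.filter((f) => providedScopes.includes(f.scope));
  }
  // 2. If an explicit function list is given, narrow to it.
  if (providedFunctions) {
    available = available.filter((f) => providedFunctions.includes(f.name));
  }
  // 3. except_functions has the highest priority: always excluded.
  return available.filter((f) => !exceptFunctions.includes(f.name));
}
```

For example, with scope `"shop"` allowed and `"refund"` excluded, only the remaining shop-scoped functions would be exposed to GPT.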
Bug Fixes
- HTTP Request Improvements:
- Fixed `httpRequest` function behavior when users hang up - HTTP requests will no longer be interrupted and return null when a user disconnects during request execution
Release 2025-05-19 (1.9.37)
Features
- Inspector Enhancements:
- Added information about who closed the call (human or AI) to the Inspector tool
- Transcript Improvements:
- Added `allow_incomplete` option (`true` or `false`) to the `getTranscription`, `getFormattedTranscription`, `askGPT`, and `answerWithGPT` functions - allows viewing text that is currently being spoken by the human
- Added `include_system_messages` option to control visibility of system messages in transcription (default value: `true`)
- DSL Type System Improvements:
- Fixed functions for types built into the DSL (previously, functions for other types could be called)
- History Management:
- Added addHistoryMessage function to manually add messages to conversation history
Release 2025-05-12 (1.9.35)
Features
- JSON Schema Structured Output Support:
- Added support for structured output using JSON Schema in GPT responses
- Control response format precisely using schema definitions
- Enforce strict typing and validation with the `strict` option
- Full documentation available here
- Speech Recognition Enhancements:
- Enhanced `getDetectedLanguage` function - now returns the text on which the language was detected
- Node.js SDK Updates:
- @dasha.ai/sdk v0.11.3:
- Added ability to pass only function descriptions without handlers for JSON Schema overriding, with implementation handled on the DSL side
Configuration Examples
JSON Schema Structured Output Example:
    var schema = {
        name: "result",
        strict: true,
        description: "The result of the AMD detection",
        schema: {
            @type: "object",
            properties: {
                result: {
                    @type: "string",
                    description: "The result of the AMD detection",
                    enum: ["Human", "IVR", "AMD", "Voicemail"]
                }
            },
            additionalProperties: false,
            required: ["result"]
        }
    }.toString();

    var ask = #askGPT($amdPrompt, {
        model: "openai/gpt-4.1-nano",
        function_call: "none",
        history_length: 0,
        save_response_in_history: false,
        response_format: schema
    }, promptName: "ivr_capturer");

    var response = (ask.responseText.parseJSON() as { result: string; })?.result ?? "Human";
Release 2025-04-24 (1.9.32)
Features
- QoL LLM features:
- Added `include_tool_calls` option to the `getTranscription` and `getFormattedTranscription` built-in functions. This option is disabled by default.
- Added `use_dynamic_tools` option (boolean type) to `gptOptions` in the `answerWithGPT`/`askGPT` built-in functions. This option is enabled by default.
- Added built-in function `getAvailableTools`
- Node.js SDK Updates:
- @dasha.ai/sdk v0.11.2:
- Added ability to provide custom JavaScript functions to the conversation context, enabling seamless integration between Dasha Script and Node.js code
- Functions defined in the SDK can now be called directly from Dasha Script, allowing for more powerful application logic and external system integrations
- Support for complex data types as parameters and return values for enhanced data handling
- SDK Events support with new documentation on sending events from the Node.js SDK to DashaScript and handling them with `when SDKEvent` or `onSDKEvent` digressions
Configuration Examples
SDK Function Integration Example:
    import { Type } from "@sinclair/typebox";

    app.queue.on("ready", async (id, conv, info) => {
        // Register a dynamic tool that GPT can call during the conversation
        await conv.setDynamicTool({
            // Function name that GPT will use to call this tool
            name: "getFruitPrice",
            // Clear description of what the function does - this helps GPT understand when to use it
            description: "Returns the price of a fruit based on its name and optional quantity",
            // Schema definition using TypeBox to define parameters and their types
            schema: Type.Object({
                name: Type.String({ description: "Name of the fruit" }),
                count: Type.Optional(Type.Number({ description: "Count of the fruits, defaults to 1 if not provided" }))
            })
        }, async (args, conv) => {
            // Implementation of the function - this code runs when GPT calls getFruitPrice
            console.log(`getFruitPrice called with args: ${JSON.stringify(args)}`);
            if (args.name === "apple") {
                return 3.25 * (args.count ?? 1);
            }
            if (args.name === "orange") {
                return 6 * (args.count ?? 1);
            }
            return "No such fruit";
        });

        const result = await conv.execute();
    });
Release 2025-04-18 (1.9.31)
Features
- Added a JSON parsing function to the `string` type
- LLM Support:
- Added support for the models `openai/o4-mini`, `openai/gpt-4.1`, `openai/gpt-4.1-mini`, and `openai/gpt-4.1-nano`
- Interrupt Enhancements:
- Enhanced interrupt behavior by intent for the `#answerWithGPT` function. The function can now be interrupted before it begins speaking
Release 2025-04-04 (1.9.28)
Features
- ElevenLabs Enhancements:
- Added `seed` parameter to `options` (uint32 value in range 0..4294967295) - if specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed.
- LLM Support:
- Added support for reasoning models like `openai/o3-mini`
- Added `reasoning_effort` option with values `low`, `medium`, `high` for reasoning models
- TTS Enhancements:
- Added `"normalize_first": 1` parameter to `options` to apply streaming volume normalization
Configuration Examples
ElevenLabs Configuration with Seed Example:

    {
        "speaker": "ElevenLabs/eleven_flash_v2_5/cgSgspJ2msm6clMCkdW9",
        "lang": "en-US",
        "options": {
            "similarity_boost": 0.75,
            "stability": 0.5,
            "use_speaker_boost": true,
            "seed": 42,
            "normalize_first": 1
        }
    }

Reasoning Model Configuration Example:

    #answerWithGPT($prompt, gptOptions: {
        "model": "openai/o3-mini",
        "reasoning_effort": "high"
    });
Release 2025-03-21 (1.9.26)
Features
- ElevenLabs Enhancements:
- Added speech rate control with the `speed` parameter (supports values from 0.7 to 1.2)
- Added support for the `force_language` parameter to explicitly override language detection and enforce a specific language code
- Improved Tone Detection:
- Decreased false-positive rate
- Added `vad_suppress_on_tone` option to disable Voice Activity Detection (VAD) and Automatic Speech Recognition (ASR) when tones are detected
Node.js SDK Updates
- @dasha.ai/sdk v0.11.1:
- Added xHeaders support for incoming SIP calls (a dictionary with the SIP `X-*` headers from the received SIP Invite)
Bug Fixes
- Fixed VoIP configuration removal issue in playground
- Fixed language detection `confidence` field output
Configuration Examples
ElevenLabs Configuration Example:

    {
        "speed": 1.1,
        "speaker": "ElevenLabs/eleven_flash_v2_5/cgSgspJ2msm6clMCkdW9",
        "lang": "en-US",
        "options": {
            "similarity_boost": 0.75,
            "stability": 0.5,
            "style": 0.3,
            "use_speaker_boost": true,
            "optimize_streaming_latency": 4,
            "force_language": true
        }
    }

Tone Detection Configuration Example:

    #connectSafe($endpoint, {
        vad_suppress_on_tone: "true",
        tone_detector: "400, 440, 480",
        vad: "asap_v1"
    });
Release 2025-02-13 (1.9.20)
Features
- Voice Technology:
- Added voice cloning support on the TTS Playground
- AI Models:
- Added `deepseek/*` model family support
- Added automatic thinking parsing for Deepseek models (including when using Groq as provider)
- UI Improvements:
- Enhanced Inspector tool with prompt name visibility
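The automatic thinking parsing added for Deepseek models can be illustrated with a small sketch: reasoning models in this family emit their chain of thought in `<think>…</think>` blocks, which must be separated from the final answer. This is a minimal illustration, not the platform's parser.

```javascript
// Separate a Deepseek-style response into hidden reasoning and the final answer.
// Sketch only - the platform's actual thinking parsing is internal.
function splitThinking(text) {
  const match = text.match(/<think>([\s\S]*?)<\/think>/);
  const thinking = match ? match[1].trim() : "";
  const answer = text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
  return { thinking, answer };
}
```

A response with no `<think>` block passes through unchanged, with an empty `thinking` field.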
Bug Fixes
- Fixed search functionality crash on the documentation site
Release 2025-02-04 (1.9.17)
Features
- Released Node.js packages (requires Node.js 18+):
- @dasha.ai/sdk v0.11.0 - Node.js SDK
- @dasha.ai/cli v0.7.0 - Command Line Interface
- Added authentication enhancements:
- Password recovery functionality
- Single Sign-On (SSO) support with Google authentication
Fixes
- Fixed GPT context loss in chained function calls when executing the pattern: functionCall -> GPT answer/ask -> subsequent functionCall
Release 2025-01-25 (1.9.15)
Features
- Added speech-to-text control functions
- Added API for voice cloning for `ElevenLabs` and `Cartesia`
- Added support for string-literal keys in object definitions, such as `{ "Content-Type": "application/json", Authorization: "Bearer ..." }`.
Fixes
- Resolved conversation crash when invoking `httpRequest` with the `useSdk: false` option and a specified `Content-Type` header.
- Fixed JavaDoc parsing issue for GPT function definitions with multiline comments.