Release Notes

Release 2025-12-18 (1.9.90)

Features

  • Speech-to-Text Keyword Boosting:
    • Added keywords parameter to #setSttOptions built-in function - allows configuring an array of keywords to boost or suppress recognition probability
    • Added stt_keyword_boost option to #connect and #connectSafe built-in functions - allows setting keyword boosting at connection time
    • Keywords can be specified as plain words or with weights in the format "word:weight", where the weight ranges from -1 (suppress) to 1 (boost)
    • Note: Some STT providers may not support negative weight values for probability suppression
  • Organization Management System:
    • Implemented new organization invitation system available at https://auth.dasha.ai/manage/access
    • Users can now belong to multiple organizations
    • Admins can invite users and modify permissions through the access management interface
    • Developers have view-only access to organization settings

Configuration Examples

Keyword Boosting with setSttOptions Example:

#setSttOptions(options: { keywords: ["apple", "banana:0.8", "orange:-0.5"] });

Keyword Boosting at Connection Time Example:

#connectSafe($endpoint, { stt_keyword_boost: ["important_term:0.9", "noise_word:-0.7"] });

Important Changes

Organization Management System Update

We have implemented a new organization invitation system, available at https://auth.dasha.ai/manage/access. A key enhancement is that users can now belong to multiple organizations, providing greater flexibility in access management.

Known Issue: Token Invalidation After Organization Change

The Problem:

A token invalidation issue occurred for tokens issued during a specific sequence:

  1. A user signs up via the authentication system
  2. A token is issued, storing the user's information and current organization
  3. The user's organization is later modified using the old method (granting access to a new organization and revoking it from the old one)
  4. The existing token becomes invalid because it still references the revoked organization

Root Cause:

Previously, the system determined a user's organization by checking the user's current information directly. After our update, it correctly uses the organization data stored inside the token itself. As a result, tokens issued before an organization change still reference the revoked organization and become invalid.

Resolution: How to Fix the Issue

If you are affected by this problem (i.e., your token was invalidated due to an organization change), you need to obtain a new API key.

Via the Playground:

  1. Sign out from both the Playground and the Auth dashboard
  2. Sign back in to both the Playground and the Auth dashboard
  3. Generate a new API key at https://playground.dasha.ai/apikey

Via the Dasha CLI:

  1. Run npx @dasha.ai/cli account login to authenticate
  2. Run npx @dasha.ai/cli account info to display your account information, including the active API key

Current System Overview

User & Access Management:

The new access management interface at https://auth.dasha.ai/manage/access provides:

  • Admins can invite users and modify permissions
  • Developers have view-only access

API Key Model:

API keys are linked to individual user accounts and scoped to a specific organization. We do not yet offer organization-level service accounts.

The API key from https://playground.dasha.ai/apikey is a user token with a very long lifespan, similar to a "Personal Token." It is revoked if the user's access to its associated organization is revoked.

Release 2025-09-03 (1.9.60)

Features

  • Voice Interruption Enhancements:
    • Added pauseOnVoiceDelay parameter to connectOptions - configures the delay, in seconds, before the agent stops speaking (the start of the fade-out) when a human interrupts it
    • Added pauseOnVoiceFadeTime parameter to connectOptions - controls the number of seconds the audio takes to fade out and back in on interruption
    • Both parameters work in conjunction with pauseOnVoice for more granular control over interruption behavior
  • TTS Enhancements:
    • Added support for custom apikey and endpoint options in TTS provider configurations. Learn more
  • Authentication Enhancements:

Configuration Examples

Voice Interruption Control Example:

#connectSafe($endpoint, {
    pauseOnVoice: "true",
    pauseOnVoiceDelay: "0.5",   // Wait 0.5 seconds before starting fade out
    pauseOnVoiceFadeTime: "0.2" // Take 0.2 seconds to fade in/out
});

Dynamic Language Change Example:

// Switch to Spanish with ElevenLabs voice
#changeLanguage("es-ES", "elevenlabs/model_id/voice_id");

// Switch to French with custom API key and endpoint
#changeLanguage("fr-FR", "elevenlabs/model_id/voice_id", options: {
    apikey: "your-custom-api-key",
    endpoint: "https://your-custom-tts-endpoint.com"
});

Release 2025-07-09 (1.9.46)

Features

  • TTS Enhancements:
    • Added Inworld TTS support - platform now supports Inworld's text-to-speech capabilities with advanced speech modification features
    • Enhanced speech expressiveness through inline tags for emotions, actions, and vocal effects
  • Voice Activity Detection (VAD) Enhancements:
    • Added vad: "asap_v2" support - detects end of voice much faster than previous versions, but may have false positives in the middle of sentences
  • Playground Enhancements:
    • Inspector: Improved accuracy of start of voice and end of voice view for better conversation analysis
  • DSL Language Enhancements:
    • Added findIndex function for arrays - returns the index of the first element that matches the specified value, or -1 if no element matches
    • Supports deep equality comparison for objects to determine element matching
    • Full documentation available here
    • Added merge function for objects - performs a shallow merge of two objects, combining their properties at the top level only
    • Added deepMerge function for objects - performs a deep merge of two objects, combining their properties recursively with nested object merging
    • Full documentation available here

Configuration Examples

Inworld TTS Voice Configuration Example:

{
    "speed": 1,
    "speaker": "Inworld/inworld-tts-1/Alex",
    "lang": "en-US",
    "options": {
        "temperature": 0.8,
        "pitch": 0
    }
}

Inworld TTS Speech Modification Example (Prompt):

Insert the following tags into the text of each sentence to make the conversation more natural. At the start of a sentence: [happy], [sad], [angry], [surprised], [fearful], [disgusted], [laughing], [whispering]. In any part of a sentence: [breathe], [clear_throat], [cough], [laugh], [sigh], [yawn]. Avoid newlines after a tag in [].
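
Array and Object Functions Example:

A minimal sketch of the new DSL helpers. The variable names are illustrative, and invoking merge and deepMerge as methods on an object value is an assumption; see the linked documentation for exact signatures.

var fruits = ["apple", "banana", "orange"];
var idx = fruits.findIndex("banana"); // index of the first matching element, or -1 if none matches

var base = { a: 1, nested: { x: 1 } };
var extra = { b: 2, nested: { y: 2 } };
var shallow = base.merge(extra);  // shallow merge: top-level properties only, nested objects are replaced
var deep = base.deepMerge(extra); // deep merge: nested objects are merged recursively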

Release 2025-06-18 (1.9.40)

Features

  • GPT Function Access Control:
    • Added provided_scopes option to control which functions are available to GPT by specifying allowed scopes. Functions can now be organized into logical groups using @scope tag in JavaDoc comments
    • Added except_functions option to explicitly exclude specific functions from being available to GPT, with highest priority over other inclusion rules
    • Enhanced function filtering system with clear priority: scopes → provided_functions → except_functions
    • Full documentation available here
  • LLM Interruption Handling:
    • Enhanced interruption handling system with comprehensive documentation and best practices
    • Added sophisticated silence management with keep_silence() function for preventing interruptions during user dictation
    • Implemented smart wait request handling with handle_wait_request() function for managing user pause requests
    • Improved smart interruption logic that analyzes conversation context to determine appropriate responses
    • Added automatic hello ping disabling during wait periods to prevent unwanted interruptions
    • Full documentation available here
  • OpenAI GPT Enhancements:
    • Added seed parameter support for deterministic sampling - when specified, repeated requests with the same seed and parameters should return more consistent results
    • Added service_tier parameter support for controlling OpenAI latency tiers - allows customers to specify processing tiers including 'auto' (default), 'default', and 'flex' for scale tier subscribers
    • Added support for openrouter/ model prefix - users can now access OpenRouter models by prefixing model names with "openrouter/" (e.g., "openrouter/anthropic/claude-3-sonnet")
    • Added openrouter_models parameter for model routing - allows automatic fallback to alternative models if the primary model is unavailable or rate-limited
    • Added openrouter_provider parameter for provider routing - enables fine-grained control over provider selection, load balancing, and routing preferences (JSON object as string)
  • Playground Enhancements:
    • Added groups management functionality - users can now share limits between applications by organizing them into groups in the Playground Groups interface
  • DSL Language Enhancements:
    • Added tone event support for when constructions - allows immediate reaction when a tone in Hz is received as specified in #connect or #connectSafe call
  • Node.js SDK Updates:
    • @dasha.ai/sdk v0.11.5:
      • Added support for scopes in dynamic tools - developers can now assign scopes to SDK-provided functions for better access control

Bug Fixes

  • HTTP Request Improvements:
    • Fixed httpRequest function behavior when users hang up - HTTP requests will no longer be interrupted and return null when a user disconnects during the request execution
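
Configuration Examples

GPT Function Access Control Example:

A minimal sketch of scope-based filtering. The function name, scope name, and the exact value types accepted by provided_scopes and except_functions are assumptions; see the linked documentation for details.

/**
 * Returns the current balance for the caller's account.
 * @scope billing
 */
// ... function definition ...

#answerWithGPT($prompt, gptOptions: {
    provided_scopes: ["billing"],       // only functions tagged @scope billing are exposed
    except_functions: ["closeAccount"]  // excluded with highest priority
});

OpenRouter Model Routing Example:

A sketch of the openrouter/ model prefix with fallback routing; the fallback model identifier is illustrative, and the array form of openrouter_models is an assumption.

#askGPT($prompt, gptOptions: {
    model: "openrouter/anthropic/claude-3-sonnet",
    openrouter_models: ["openai/gpt-4o-mini"] // fallback if the primary model is unavailable or rate-limited
});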

Release 2025-05-19 (1.9.37)

Features

  • Inspector Enhancements:
    • Added information about who closed the call (human or AI) to the Inspector tool
  • Transcript Improvements:
    • Added allow_incomplete option (true or false) to getTranscription, getFormattedTranscription, askGPT and answerWithGPT functions - allows viewing text that is currently being spoken by the human
    • Added include_system_messages option to control visibility of system messages in transcription (default value: true)
  • DSL Type System Improvements:
    • Fixed function resolution for types defined in DSL (previously, functions belonging to other types could be called)
  • History Management:
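
Configuration Examples

Transcript Options Example:

A minimal sketch; passing these options in an options object follows the pattern of other built-in functions and should be checked against the documentation.

// Include text the human is still speaking, and hide system messages
var transcript = #getFormattedTranscription({
    allow_incomplete: true,
    include_system_messages: false // default is true
});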

Release 2025-05-12 (1.9.35)

Features

  • JSON Schema Structured Output Support:
    • Added support for structured output using JSON Schema in GPT responses
    • Control response format precisely using schema definitions
    • Enforce strict typing and validation with the strict option
    • Full documentation available here
  • Speech Recognition Enhancements:
    • Enhanced getDetectedLanguage function - now returns the text on which the language was detected
  • Node.js SDK Updates:
    • @dasha.ai/sdk v0.11.3:
      • Added ability to pass only function descriptions without handlers for JSON Schema overriding, with implementation handled on the DSL side

Configuration Examples

JSON Schema Structured Output Example:

var schema = {
    name: "result",
    strict: true,
    description: "The result of the AMD detection",
    schema: {
        @type: "object",
        properties: {
            result: {
                @type: "string",
                description: "The result of the AMD detection",
                enum: ["Human", "IVR", "AMD", "Voicemail"]
            }
        },
        additionalProperties: false,
        required: ["result"]
    }
}.toString();

var ask = #askGPT($amdPrompt, {
    model: "openai/gpt-4.1-nano",
    function_call: "none",
    history_length: 0,
    save_response_in_history: false,
    response_format: schema
}, promptName: "ivr_capturer");

var response = (ask.responseText.parseJSON() as { result: string; })?.result ?? "Human";

Release 2025-04-24 (1.9.32)

Features

  • QoL LLM features:
    • Added option include_tool_calls to getTranscription and getFormattedTranscription built-in functions. This option is disabled by default.
    • Added option use_dynamic_tools (boolean type) to gptOptions in answerWithGPT/askGPT built-in functions. This option is enabled by default.
    • Added built-in function getAvailableTools
  • Node.js SDK Updates:
    • @dasha.ai/sdk v0.11.2:
      • Added ability to provide custom JavaScript functions to the conversation context, enabling seamless integration between Dasha Script and Node.js code
      • Functions defined in the SDK can now be called directly from Dasha Script, allowing for more powerful application logic and external system integrations
      • Support for complex data types as parameters and return values for enhanced data handling
      • SDK Events support with new documentation on sending events from the Node.js SDK to DashaScript and handling them with when SDKEvent or onSDKEvent digressions

Configuration Examples

SDK Function Integration Example:

import { Type } from "@sinclair/typebox";

app.queue.on("ready", async (id, conv, info) => {
    // Register a dynamic tool that GPT can call during the conversation
    await conv.setDynamicTool({
        // Function name that GPT will use to call this tool
        name: "getFruitPrice",
        // Clear description of what the function does - this helps GPT understand when to use it
        description: "Returns the price of a fruit based on its name and optional quantity",
        // Schema definition using TypeBox to define parameters and their types
        schema: Type.Object({
            name: Type.String({ description: "Name of the fruit" }),
            count: Type.Optional(Type.Number({ description: "Count of the fruits, defaults to 1 if not provided" }))
        })
    }, async (args, conv) => {
        // Implementation of the function - this code runs when GPT calls getFruitPrice
        console.log(`getFruitPrice called with args: ${JSON.stringify(args)}`);
        if (args.name === "apple") {
            return 3.25 * (args.count ?? 1);
        }
        if (args.name === "orange") {
            return 6 * (args.count ?? 1);
        }
        return "No such fruit";
    });

    const result = await conv.execute();
});
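
Tool Call Inspection Example:

A minimal sketch of the new QoL options; exact signatures may differ from this illustration.

// Include tool calls in the transcription (disabled by default)
var transcript = #getFormattedTranscription({ include_tool_calls: true });

// Disable SDK-provided dynamic tools for a single request (enabled by default)
var answer = #askGPT($prompt, { use_dynamic_tools: false });

// Inspect which tools GPT can currently call
var tools = #getAvailableTools();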

Release 2025-04-18 (1.9.31)

Features

  • Added parseJSON function to the string type
  • LLM Support:
    • Added support for the models openai/o4-mini, openai/gpt-4.1, openai/gpt-4.1-mini, and openai/gpt-4.1-nano
  • Interrupt Enhancements:
    • Enhanced interrupt behavior by intent for the #answerWithGPT function. The function can now be interrupted before it begins speaking

Release 2025-04-04 (1.9.28)

Features

  • ElevenLabs Enhancements:
    • Added seed parameter to options (uint32 value in range 0..4294967295) - If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed.
  • LLM Support:
    • Added support for reasoning models like openai/o3-mini
    • Added option reasoning_effort with values low, medium, high for reasoning models
  • TTS Enhancements:
    • Added "normalize_first": 1 parameter to options to apply streaming volume normalization

Configuration Examples

ElevenLabs Configuration with Seed Example:

{
    "speaker": "ElevenLabs/eleven_flash_v2_5/cgSgspJ2msm6clMCkdW9",
    "lang": "en-US",
    "options": {
        "similarity_boost": 0.75,
        "stability": 0.5,
        "use_speaker_boost": true,
        "seed": 42,
        "normalize_first": 1
    }
}

Reasoning Model Configuration Example:

#answerWithGPT($prompt, gptOptions: { "model": "openai/o3-mini", "reasoning_effort": "high" });

Release 2025-03-21 (1.9.26)

Features

  • ElevenLabs Enhancements:
    • Added speech rate control with speed parameter (supports values from 0.7 to 1.2)
    • Added support for force_language parameter to explicitly override language detection and enforce a specific language code
  • Improved Tone Detection:
    • Decreased false-positive rate
    • Added vad_suppress_on_tone option to disable Voice Activity Detection (VAD) and Automatic Speech Recognition (ASR) when tones are detected

Node.js SDK Updates

  • @dasha.ai/sdk v0.11.1:
    • Added xHeaders support for incoming SIP calls (dictionary with SIP X-* headers in received SIP Invite)

Bug Fixes

  • Fixed VoIP configuration removal issue in playground
  • Fixed language detection confidence field output

Configuration Examples

ElevenLabs Configuration Example:

{
    "speed": 1.1,
    "speaker": "ElevenLabs/eleven_flash_v2_5/cgSgspJ2msm6clMCkdW9",
    "lang": "en-US",
    "options": {
        "similarity_boost": 0.75,
        "stability": 0.5,
        "style": 0.3,
        "use_speaker_boost": true,
        "optimize_streaming_latency": 4,
        "force_language": true
    }
}

Tone Detection Configuration Example:

#connectSafe($endpoint, {
    vad_suppress_on_tone: "true",
    tone_detector: "400, 440, 480",
    vad: "asap_v1"
});

Release 2025-02-13 (1.9.20)

Features

  • Voice Technology:
  • AI Models:
    • Added deepseek/* model family support
    • Added automatic thinking parsing for Deepseek models (including when using Groq as provider)
  • UI Improvements:
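
Configuration Examples

Deepseek Model Example:

A minimal sketch; deepseek/deepseek-chat is an assumed model identifier following the deepseek/* prefix.

#answerWithGPT($prompt, gptOptions: { model: "deepseek/deepseek-chat" });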

Bug Fixes

Release 2025-02-04 (1.9.17)

Features

  • Released Node.js packages (requires Node.js 18+):
  • Added authentication enhancements:
    • Password recovery functionality
    • Single Sign-On (SSO) support with Google authentication

Fixes

  • Fixed GPT context loss in chained function calls when executing pattern: functionCall -> GPT answer/ask -> subsequent functionCall

Release 2025-01-25 (1.9.15)

Features

Fixes

  • Resolved conversation crash when invoking httpRequest with the useSdk: false option and a specified Content-Type header.
  • Fixed Javadoc parsing issue for GPT function definitions with multiline comments.