Release Notes

Release 2025-04-24 (1.9.32)

Features

  • QoL LLM features:
    • Added option include_tool_calls to getTranscription and getFormattedTranscription built-in functions. This option is disabled by default.
    • Added option use_dynamic_tools (boolean type) to gptOptions in answerWithGPT/askGPT built-in functions. This option is enabled by default.
    • Added built-in function getAvailableTools
  • Node.js SDK Updates:
    • @dasha.ai/sdk v0.11.2:
      • Added ability to provide custom JavaScript functions to the conversation context, enabling seamless integration between Dasha Script and Node.js code
      • Functions defined in the SDK can now be called directly from Dasha Script, allowing for more powerful application logic and external system integrations
      • Support for complex data types as parameters and return values for enhanced data handling
      • SDK Events support with new documentation on sending events from the Node.js SDK to DashaScript and handling them with when SDKEvent or onSDKEvent digressions
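
Dynamic Tools Usage Example (illustrative): the snippet below sketches how the new LLM options might be used from DashaScript. The call syntax mirrors the examples elsewhere in these notes, but the exact parameter spellings for include_tool_calls and the return shape of getAvailableTools are assumptions — check the built-in function reference.

```
// Sketch only: parameter spellings are assumptions based on the notes above.

// Dynamic tools are enabled by default; shown here explicitly for clarity.
#answerWithGPT($prompt, gptOptions: { "use_dynamic_tools": true });

// List the tools currently available to the model (new built-in).
var tools = #getAvailableTools();

// Include tool calls in the transcription (disabled by default).
var transcript = #getFormattedTranscription(include_tool_calls: true);
```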

Configuration Examples

SDK Function Integration Example:

import { Type } from "@sinclair/typebox";

app.queue.on("ready", async (id, conv, info) => {
  // Register a dynamic tool that GPT can call during the conversation
  await conv.setDynamicTool(
    {
      // Function name that GPT will use to call this tool
      name: "getFruitPrice",
      // Clear description of what the function does - this helps GPT understand when to use it
      description: "Returns the price of a fruit based on its name and optional quantity",
      // Schema definition using TypeBox to define parameters and their types
      schema: Type.Object({
        name: Type.String({ description: "Name of the fruit" }),
        count: Type.Optional(
          Type.Number({ description: "Count of the fruits, defaults to 1 if not provided" })
        ),
      }),
    },
    async (args, conv) => {
      // Implementation of the function - this code runs when GPT calls getFruitPrice
      console.log(`getFruitPrice called with args: ${JSON.stringify(args)}`);
      if (args.name === "apple") {
        return 3.25 * (args.count ?? 1);
      }
      if (args.name === "orange") {
        return 6 * (args.count ?? 1);
      }
      return "No such fruit";
    }
  );
  const result = await conv.execute();
});
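
SDK Events Example (illustrative): the notes above also mention sending events from the Node.js SDK to DashaScript. The sketch below conveys the idea only; the method name sendEvent and its payload shape are assumptions, so consult the v0.11.2 SDK Events documentation for the actual API.

```
// Sketch: push an event from Node.js into a running conversation.
// NOTE: `sendEvent` is an assumed method name; see the SDK Events docs.
app.queue.on("ready", async (id, conv, info) => {
  const execution = conv.execute();
  // The script side can react to this with a `when SDKEvent`
  // condition or an `onSDKEvent` digression.
  await conv.sendEvent("priceUpdated", { sku: "apple", price: 3.5 }); // hypothetical event
  await execution;
});
```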

Release 2025-04-18 (1.9.31)

Features

  • Added a function to parse JSON into the string type
  • LLM Support:
    • Added support for the models openai/o4-mini, openai/gpt-4.1, openai/gpt-4.1-mini, and openai/gpt-4.1-nano
  • Interrupt Enhancements:
    • Enhanced interrupt behavior by intent for the #answerWithGPT function. The function can now be interrupted before it begins speaking

Release 2025-04-04 (1.9.28)

Features

  • ElevenLabs Enhancements:
    • Added seed parameter to options (uint32 value in the range 0..4294967295). If specified, our system will make a best effort to sample deterministically, so repeated requests with the same seed and parameters should return the same result; determinism is not guaranteed.
  • LLM Support:
    • Added support for reasoning models like openai/o3-mini
    • Added option reasoning_effort with values low, medium, high for reasoning models
  • TTS Enhancements:
    • Added "normalize_first": 1 option to apply streaming volume normalization

Configuration Examples

ElevenLabs Configuration with Seed Example:

{
  "speaker": "ElevenLabs/eleven_flash_v2_5/cgSgspJ2msm6clMCkdW9",
  "lang": "en-US",
  "options": {
    "similarity_boost": 0.75,
    "stability": 0.5,
    "use_speaker_boost": true,
    "seed": 42,
    "normalize_first": 1
  }
}

Reasoning Model Configuration Example:

#answerWithGPT($prompt, gptOptions: { "model": "openai/o3-mini", "reasoning_effort": "high" });

Release 2025-03-21 (1.9.26)

Features

  • ElevenLabs Enhancements:
    • Added speech rate control with speed parameter (supports values from 0.7 to 1.2)
    • Added support for force_language parameter to explicitly override language detection and enforce a specific language code
  • Improved Tone Detection:
    • Decreased false-positive rate
    • Added vad_suppress_on_tone option to disable Voice Activity Detection (VAD) and Automatic Speech Recognition (ASR) when tones are detected

Node.js SDK Updates

  • @dasha.ai/sdk v0.11.1:
    • Added xHeaders support for incoming SIP calls (dictionary with SIP X-* headers in received SIP Invite)

Bug Fixes

  • Fixed VoIP configuration removal issue in playground
  • Fixed language detection confidence field output

Configuration Examples

ElevenLabs Configuration Example:

{
  "speed": 1.1,
  "speaker": "ElevenLabs/eleven_flash_v2_5/cgSgspJ2msm6clMCkdW9",
  "lang": "en-US",
  "options": {
    "similarity_boost": 0.75,
    "stability": 0.5,
    "style": 0.3,
    "use_speaker_boost": true,
    "optimize_streaming_latency": 4,
    "force_language": true
  }
}

Tone Detection Configuration Example:

#connectSafe($endpoint, {
  vad_suppress_on_tone: "true",
  tone_detector: "400, 440, 480",
  vad: "asap_v1"
});

Release 2025-02-13 (1.9.20)

Features

  • AI Models:
    • Added deepseek/* model family support
    • Added automatic thinking parsing for Deepseek models (including when using Groq as the provider)

Release 2025-02-04 (1.9.17)

Features

  • Released Node.js packages (requires Node.js 18+)
  • Added authentication enhancements:
    • Password recovery functionality
    • Single Sign-On (SSO) support with Google authentication

Fixes

  • Fixed GPT context loss in chained function calls when executing the pattern functionCall -> GPT answer/ask -> subsequent functionCall

Release 2025-01-25 (1.9.15)

Fixes

  • Resolved conversation crash when invoking httpRequest with the useSdk: false option and a specified Content-Type header.
  • Fixed Javadoc parsing issue for GPT function definitions with multiline comments.
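
httpRequest Example (illustrative): for context on the first fix above, a call of the affected shape might look like the sketch below. The argument layout, example URL, and body are hypothetical; only useSdk: false and the Content-Type header come from the fix itself.

```
// Sketch only: argument layout is illustrative, not the documented signature.
var response = #httpRequest("https://api.example.com/submit", {
    method: "POST",
    useSdk: false,                                    // option from the fix above
    headers: { "Content-Type": "application/json" },  // header from the fix above
    body: "{\"status\": \"ok\"}"
});
```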