Changelog

New features, improvements, and fixes.

October 6, 2025

Anam is now HIPAA compliant

A big milestone for our customers and partners. Anam now meets the standards required for HIPAA compliance, the U.S. regulation that protects sensitive health information. This means healthcare organizations and companies handling medical data can use Anam with confidence that their data is protected and processed securely.

What HIPAA compliance means.

HIPAA (Health Insurance Portability and Accountability Act) sets national standards for safeguarding medical information. Compliance confirms that Anam maintains strict administrative, physical, and technical safeguards, covering how data is stored, encrypted, accessed, and shared.

An independent assessment verified that Anam’s systems and policies meet the HIPAA Security Rule and Privacy Rule requirements.

Security is built into Anam.

Security has been a core principle since day one. Achieving HIPAA compliance reinforces our commitment to keeping your data private and secure while ensuring reliability and transparency for regulated industries.

Access our Trust Center.

You can review our security policies, data handling procedures, subprocessors, and compliance documentation, including our HIPAA attestation, at the Anam Trust Center.

Improvements

Enhanced voice selection

You can now search voices by use case or conversational style! We also support 50+ languages, all of which can now be previewed in the lab at once.

Product tour update

We updated our product tour to help you find the right features and the right plan for you.

Streamlined One-Shot avatar creation

Redesigned one-shot flow with clearer step progression and enhanced mobile responsiveness.

Naming personas is now automatic

New persona names are now auto-generated based on the selected avatar.

Session start time

Session start-up time is expected to improve by 1.1 seconds per session.

Fixes

Share links

Fixed share-link sessions taking extra concurrency slots.

Improvements

Improved TTS pronunciation

Improved TTS pronunciation for all languages by adapting our input text chunking.
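
As a rough illustration of the idea (not our actual pipeline code), chunking streamed text on sentence boundaries rather than at fixed character counts hands the TTS engine complete phrases to pronounce:

```ts
// Illustrative sketch only: split streamed text into sentence-sized chunks
// before sending it to TTS, so words and phrases are never cut mid-way.
function chunkForTts(text: string, maxChars = 200): string[] {
  // Split on sentence-ending punctuation (covers most Latin-script languages).
  const sentences = text.match(/[^.!?]+[.!?]+|\S[^.!?]*$/g) ?? [text];
  const chunks: string[] = [];
  let current = "";
  for (const sentence of sentences) {
    if (current && (current + sentence).length > maxChars) {
      chunks.push(current.trim());
      current = "";
    }
    current += sentence;
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}

// Each chunk is a complete phrase, never a mid-word split.
console.log(chunkForTts("Hello there! How can I help you today? Let me check."));
```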

Traceability and monitoring of session IDs

Session IDs are now sent through all LLM calls to improve traceability and monitoring.
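
If you run a server-side custom LLM, the same pattern is easy to mirror on your side; here is a minimal sketch, assuming a hypothetical correlation header, an example endpoint, and an OpenAI-style request body:

```ts
// Minimal sketch: attach a session ID to every LLM request so logs and traces
// can be correlated. The header name, endpoint, and body shape are assumptions.
async function callLlm(sessionId: string, prompt: string): Promise<string> {
  const response = await fetch("https://llm.example.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Session-Id": sessionId, // hypothetical correlation header
    },
    body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
  });
  console.log(`[llm] session=${sessionId} status=${response.status}`);
  const data = await response.json();
  return data.choices?.[0]?.message?.content ?? "";
}
```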

Increased audio sampling rate

Internal audio sampling rate increased from 16 kHz to 24 kHz, delivering noticeably richer audio for Anam Personas.

Websocket size increase

Increased the maximum WebSocket message size for larger talk stream chunks (from 1 MB to 16 MB).
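
For context, this limit is enforced per message by the WebSocket server; the sketch below shows how such a cap is typically configured, using Node's ws library purely as an illustration of the setting we raised:

```ts
import { WebSocketServer } from "ws";

// Illustrative only: how a maximum message size is usually raised on a
// WebSocket server. The 16 MB value mirrors the new talk-stream limit.
const wss = new WebSocketServer({
  port: 8080,
  maxPayload: 16 * 1024 * 1024, // messages larger than this are rejected
});

wss.on("connection", (socket) => {
  socket.on("message", (data) => {
    console.log(`received talk-stream chunk of ${(data as Buffer).length} bytes`);
  });
});
```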

Fixes

Concurrency calculation fix

Fixed the concurrency calculation to only consider sessions from the last 2 hours.
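
A simplified sketch of the corrected calculation (types and threshold shown for illustration only):

```ts
// Illustrative sketch: only sessions that started within the last two hours
// and have not ended count towards current concurrency.
interface Session {
  startedAt: Date;
  endedAt?: Date;
}

function activeConcurrency(sessions: Session[], now = new Date()): number {
  const twoHoursAgo = now.getTime() - 2 * 60 * 60 * 1000;
  return sessions.filter(
    (s) => s.startedAt.getTime() >= twoHoursAgo && !s.endedAt
  ).length;
}
```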

Less freezing for slower LLMs

Slower LLMs now result in less freezing, at the cost of shorter "chunks" of speech.

September 21, 2025

Session Analytics

Once a conversation ends, how do you review what happened? To help you understand and improve your Persona's performance, we're launching Session Analytics in the Lab. Now you can access a detailed report for every conversation, complete with a full transcript, performance metrics, and AI-powered analysis.

  • Full Conversation Transcripts. Review every turn of a conversation with a complete, time-stamped transcript. See what the user said and how your Persona responded, making it easy to diagnose issues and identify successful interaction patterns.
  • Detailed Analytics & Timeline. Alongside the transcript, a new Analytics tab provides key metrics grouped into "Transcript Metrics" (word count, turns) and "Processing Metrics" (e.g., LLM latency). A visual timeline charts the entire conversation, showing who spoke when and highlighting any technical warnings.
  • AI-Powered Insights. For a deeper analysis, you can generate an AI-powered summary and review key insights. This feature, currently powered by gpt-5-mini, evaluates the conversation for highlights, adherence to the system prompt, and user interruption rates.

You can find your session history on the Sessions page in the Lab. Click on any past session to explore the new analytics report. This is available today for all session types, except for LiveKit sessions. For privacy-sensitive applications, session logging can be disabled via the SDK.
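
A rough sketch of what opting out might look like in code; the option name below is hypothetical, so check the SDK reference for the real flag:

```ts
import { createClient } from "@anam-ai/js-sdk";

// Hypothetical sketch only: "disableSessionLogging" is an illustrative option
// name, not the real SDK flag. See the SDK reference for the actual setting.
const sessionToken = "<session-token-from-your-server>";
const client = createClient(sessionToken, {
  disableSessionLogging: true, // assumption: placeholder for the privacy opt-out
});
```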

Improvements

Improved Voice Discovery

The Voices page has been updated to be more searchable, allowing you to preview voices with a single click and view new details such as gender, TTS model, and language.

Fixes

Fixed share-link session bug

Fixed a bug where share-link sessions took an extra concurrency slot.

Improvements

Small improvement to connection time

Tweaks to how we perform WebRTC signalling allow for slightly faster connection times (~900 ms faster at p95).

Improvement to output audio quality for poor connections

Enabled Opus in-band FEC to improve audio quality under packet loss.
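
Under the hood, in-band FEC is negotiated through the Opus fmtp parameters in the SDP; the sketch below shows the mechanism for the curious (our stack applies this automatically, so there's nothing to change on your side):

```ts
// Illustrative sketch: enable Opus in-band FEC by adding useinbandfec=1 to the
// Opus fmtp line of an SDP. Shown only to explain what changes on the wire.
function enableOpusFec(sdp: string): string {
  // Find the Opus payload type, e.g. "a=rtpmap:111 opus/48000/2".
  const rtpmap = sdp.match(/a=rtpmap:(\d+) opus\/48000\/2/i);
  if (!rtpmap) return sdp;
  const pt = rtpmap[1];
  const fmtpLine = new RegExp(`a=fmtp:${pt} (.*)`, "i");
  if (fmtpLine.test(sdp)) {
    // Append the parameter to the existing fmtp line if it isn't already there.
    return sdp.replace(fmtpLine, (line) =>
      line.includes("useinbandfec") ? line : `${line};useinbandfec=1`
    );
  }
  // No fmtp line yet: add one right after the rtpmap line.
  return sdp.replace(rtpmap[0], `${rtpmap[0]}\r\na=fmtp:${pt} useinbandfec=1`);
}
```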

Small reduction in network latency

Optimisations have been made to our outbound media streams to reduce A/V jitter (and hence jitter buffer delay). Expected latency improvement is modest (<50ms).

Fixes

Fix for livekit sessions with slow TTS audio

Stabilizes LiveKit streaming by pacing output and duplicating frames during slowdowns to prevent underflow.
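
Conceptually, the pacer behaves like the simplified sketch below (illustrative only, not the production pipeline): emit one audio frame per tick and repeat the last frame whenever TTS falls behind, so the outbound track never underflows.

```ts
// Simplified sketch of the pacing idea: one 20 ms frame per tick, repeating
// the previous frame when TTS is too slow to keep the queue filled.
class FramePacer {
  private queue: Int16Array[] = [];
  private lastFrame = new Int16Array(480); // 20 ms of silence at 24 kHz

  push(frame: Int16Array): void {
    this.queue.push(frame);
  }

  start(send: (frame: Int16Array) => void): ReturnType<typeof setInterval> {
    return setInterval(() => {
      const frame = this.queue.shift();
      if (frame) {
        this.lastFrame = frame;
        send(frame);
      } else {
        send(this.lastFrame); // duplicate the last frame during a slowdown
      }
    }, 20);
  }
}
```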

September 11, 2025

Intelligent LLM Routing for Faster Responses

The performance of LLM endpoints can be highly variable, with time-to-first-token latencies sometimes fluctuating by as much as 500ms from one day to the next depending on regional load. To solve this and ensure your personas respond as quickly and reliably as possible, we've rolled out a new intelligent routing system for LLM requests. This is active for both our turnkey customers and for customers using their own server-side Custom LLMs if they deploy multiple endpoints.

LLM config options

This new system constantly monitors the health and performance of all configured LLM endpoints by sending lightweight probes at regular intervals. Using a time-aware moving average, it builds a real-time picture of network latency and processing speed for each endpoint. When a request is made, the system uses this data to calculate the optimal route, automatically shedding load from any overloaded or slow endpoints within a region.
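
To make the approach concrete, here is a minimal sketch (illustrative only, not the production router) of a moving average over probe latencies and the resulting endpoint choice:

```ts
// Minimal sketch of the routing idea: keep an exponentially weighted moving
// average of probe latency per endpoint and route each request to the fastest
// healthy endpoint, shedding load from slow or failing ones.
interface Endpoint {
  url: string;
  ewmaMs: number; // smoothed time-to-first-token estimate
  healthy: boolean;
}

const ALPHA = 0.3; // higher = react faster to the most recent probes

function recordProbe(endpoint: Endpoint, latencyMs: number, ok: boolean): void {
  endpoint.healthy = ok;
  if (ok) {
    endpoint.ewmaMs = ALPHA * latencyMs + (1 - ALPHA) * endpoint.ewmaMs;
  }
}

function pickEndpoint(endpoints: Endpoint[]): Endpoint | undefined {
  return endpoints
    .filter((e) => e.healthy)
    .sort((a, b) => a.ewmaMs - b.ewmaMs)[0]; // fastest healthy endpoint wins
}

// Example: after a slow probe, requests shed away from the degraded endpoint.
const endpoints: Endpoint[] = [
  { url: "https://llm-eu.example.com", ewmaMs: 300, healthy: true },
  { url: "https://llm-us.example.com", ewmaMs: 300, healthy: true },
];
recordProbe(endpoints[0], 800, true); // the EU endpoint is slow today
console.log(pickEndpoint(endpoints)?.url); // -> https://llm-us.example.com
```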

Improvements

Generate one-shot avatars from text prompts

You can now generate one-shot avatars from text prompts within the lab, powered by Gemini’s new Nano Banana model. The one-shot creation flow has been redesigned for speed and ease-of-use, and is now available to all plans. Image upload and webcam avatars remain exclusive to Pro and Enterprise.

One shot text to image

Improved management of published embed widgets

Published embed widgets can now be configured and monitored from the lab at https://lab.anam.ai/personas/published.

Improvements

Automatic failover to backup data centres

To ensure maximum uptime and reliability for our personas, we’ve implemented automatic failover to backup data centres.

Fixes

Prevent session crash on long user speech

Previously, unbroken user speech exceeding 30 seconds would trigger a transcription error and crash the session. We now automatically truncate continuous speech to 30 seconds, preventing sessions from failing in these rare cases.
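
The guard is essentially a cap on the audio handed to transcription; a minimal sketch, with an assumed sample rate for illustration:

```ts
// Illustrative sketch: cap a continuous utterance before transcription so a
// single unbroken stretch of speech can't exceed the 30-second limit.
const SAMPLE_RATE = 16_000; // assumed input rate, for illustration
const MAX_SAMPLES = SAMPLE_RATE * 30;

function truncateUtterance(samples: Float32Array): Float32Array {
  return samples.length > MAX_SAMPLES ? samples.slice(0, MAX_SAMPLES) : samples;
}
```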

Allow configurable session lengths of up to 2 hours for Enterprise plans

We had a bug where sessions had a maximum timeout of 30 minutes instead of 2 hours on Enterprise plans. This has now been fixed.

Resolved slow connection times caused by incorrect database region selection

An undocumented issue with our database provider led to incorrect region selection for our databases. Simply refreshing our credentials resolved the problem, resulting in a ~1s improvement in median connection times and ~3s faster p95 times. While our provider works on a permanent fix, we’re actively monitoring for any recurrence.

September 4, 2025

Embed Widget

Embed personas directly into your website with our new widget. From the lab's build page, click Publish, then generate your unique HTML snippet. This snippet works in most common website builders, e.g. WordPress.org or Squarespace.

For added security we recommend adding your domain URL to a whitelist. This locks the persona down so it only works on your website. You can also cap the number of sessions or give the widget an expiration period.

Improvements

One-shot avatars available via API

Professional and Enterprise accounts can now create one-shot avatars via API. Docs here.

Improvements

Spend caps

It's now possible to set a spend cap on your account. Available in profile settings.

Fixes

Prevent Cartesia from timing out when using slow custom LLMs

We’ve added a safeguard to prevent Cartesia contexts from unexpectedly closing during pauses in text streaming. With slower LLMs, or if there’s a break or slowdown in the text being sent, your connection will now stay alive, ensuring smoother, uninterrupted interactions.
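
The general shape of such a safeguard is a watchdog on the text stream; a simplified sketch (illustrative only, not the Cartesia API or our production code):

```ts
// Simplified sketch: watch the incoming text stream and flush whatever has
// been buffered if the upstream LLM pauses for too long, so the downstream
// TTS context is never left idle past its timeout. Threshold is illustrative.
class TextStreamWatchdog {
  private buffer = "";
  private lastFlush = Date.now();

  constructor(
    private flush: (text: string) => void,
    private maxIdleMs = 2_000
  ) {
    setInterval(() => this.check(), 250);
  }

  push(text: string): void {
    this.buffer += text;
  }

  private check(): void {
    if (this.buffer && Date.now() - this.lastFlush >= this.maxIdleMs) {
      this.flush(this.buffer); // send what we have instead of going silent
      this.buffer = "";
      this.lastFlush = Date.now();
    }
  }
}
```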