Conversational AI design with client-side context injection

Your avatar agent is mid-conversation with a user when they navigate to your pricing page. The agent keeps talking about onboarding because it has no idea the user just switched contexts. Every answer is technically correct but contextually wrong.

This is a conversational AI design problem. The agent only knows what the user has said. It doesn't know what they're doing.

addContext() fixes that. It's a new method in the Anam JavaScript SDK (v4.11.0) that lets you silently inject information into the conversation history without triggering a response. The persona doesn't reply immediately. But the next time the user speaks, that context shapes the answer.


Why conversational AI design needs context injection

Context injection is a design pattern where your application pushes relevant state to the agent in the background. The persona isn't responding to the context. It's absorbing it, the way a human colleague would notice you've pulled up a different screen.

Without it, you're stuck with two bad options. Either the user manually tells the agent what they're doing ("I'm on your pricing page now, looking at the enterprise plan"), or you call sendUserMessage() on their behalf and the agent responds out of turn. Context injection is the third option: silent, non-disruptive, and available when the agent needs it.

This matters for interactive avatar agents especially. When a user is face-to-face with an avatar, an unprompted response feels jarring. The avatar should be listening, not interrupting. But it should still know what's happening in the application around it.

Picture an interactive avatar that knows you just opened the pricing page and can answer "what's included in the enterprise plan?" without you explaining the context first. That's a better conversational experience, and it's the kind of thing that separates a demo from a product.


How addContext() works

The method lives on AnamClient. Call it any time during an active streaming session:


import { AnamClient } from '@anam-ai/js-sdk';

const client = new AnamClient('your-session-token');
await client.streamToVideoAndAudioElements('video-id', 'audio-id');

// Later, when the user does something relevant:
client.addContext(
  'User navigated to the Enterprise pricing page. ' +
  'They are viewing the annual plan at $499/month.'
);


Under the hood, this sends a message_type: 'context' payload over the WebRTC data channel. Compare that to sendUserMessage(), which sends message_type: 'speech' and triggers an immediate persona response. addContext() appends to the conversation history without prompting the persona to speak.
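The behavioral difference can be sketched as a type distinction. Only the `message_type` values come from the description above; the rest of this sketch is illustrative, not the actual wire format:

```typescript
// Illustrative only: the SDK sends messages over a WebRTC data channel and
// the exact payload shape is internal. This sketch just encodes the
// behavioral difference between the two methods.
type OutboundMessage =
  | { message_type: 'speech'; content: string }   // sendUserMessage(): persona replies
  | { message_type: 'context'; content: string }; // addContext(): silent append

// Whether the persona is prompted to respond to a given message.
function triggersResponse(msg: OutboundMessage): boolean {
  return msg.message_type === 'speech';
}
```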

The method throws if you call it while not streaming or without an active session. One string parameter, no return value, no callback.
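Because the method throws outside an active session, a defensive wrapper helps when injections fire from UI events that can race against session teardown. A minimal sketch, assuming an `isStreaming` check: that callback is a stand-in for whatever session check your integration exposes, not an SDK API.

```typescript
// Minimal shape of the part of the client this helper needs.
type ContextClient = {
  addContext(text: string): void;
};

// Returns true if the context was injected, false if it was skipped.
function tryAddContext(
  client: ContextClient,
  isStreaming: () => boolean, // assumed: your own "session is live" check
  text: string
): boolean {
  if (!isStreaming()) return false; // skip quietly instead of throwing mid-UI-event
  try {
    client.addContext(text);
    return true;
  } catch {
    // The session may have ended between the check and the call.
    return false;
  }
}
```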


Three patterns worth stealing


CRM and user profile data

Load the user's account data when the session starts:


const user = await fetchUserProfile(userId);

client.addContext(
  `This user is ${user.name}, on the ${user.plan} plan since ${user.signupDate}. ` +
  `They have ${user.openTickets} open support tickets. ` +
  `Their last conversation was about ${user.lastTopic}.`
);


The avatar now greets a returning Enterprise customer differently than a new free-tier user. No one has to explain who they are.


Page navigation events

Push route changes as the user moves through your app:


router.on('routeChange', (route) => {
  client.addContext(`User navigated to ${route.name}: ${route.description}`);
});


When the user asks "how does this work?" the avatar knows which "this" they mean. When they ask "what's this cost?" the avatar knows which product page they're on.


Real-time application state

Feed the avatar what the user is looking at right now:


function onProductSelect(product) {
  client.addContext(
    `User is now viewing: ${product.name} ($${product.price}). ` +
    `Category: ${product.category}. In stock: ${product.inStock}.`
  );
}


The avatar can answer questions about the selected product without the user having to describe it. Same idea applies to form progress, cart contents, dashboard filters, or anything else the user is actively interacting with.


When to hold back

Don't use addContext() as a replacement for system prompts. The system prompt defines the persona's behavior and personality. Context injection provides runtime information. Different jobs.

Don't flood it either. Every call appends to the conversation history, which means token usage grows. Inject when something meaningful changes, not on every scroll event or mouse move.
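One way to enforce that discipline is a small gate that drops duplicate or rapid-fire injections before they reach addContext(). A minimal sketch (the function and its parameters are illustrative, not SDK API):

```typescript
// Returns a predicate: true means "inject this", false means "skip it".
// Suppresses a context string if it's identical to the last one sent
// within the given window.
function makeContextGate(minIntervalMs: number) {
  let last = '';
  let lastAt = -Infinity;
  return (text: string, now: number = Date.now()): boolean => {
    if (text === last && now - lastAt < minIntervalMs) return false;
    last = text;
    lastAt = now;
    return true;
  };
}
```

Wire it in front of the SDK call, e.g. `if (gate(msg)) client.addContext(msg);`, so scroll-driven or repeated state updates collapse into a single injection.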

And don't use it to fake user input. If you want the user to "say" something, use sendUserMessage(). Context injection is for things the application knows that the user shouldn't have to repeat.


Getting started

Install or update the JavaScript SDK to v4.11.0 or later:


npm install @anam-ai/js-sdk


Call addContext() on any active session. The getting started guide covers session setup if you're new. The API reference has the full method signature.

Context injection works with any persona configuration. If you're running turnkey (where Anam handles STT, LLM, TTS, and face generation), the context appears in the managed conversation history. In BYO LLM mode where you control the intelligence layer, the context appears in the conversation history your LLM receives, so you can build custom logic around it.


FAQ


What is context injection in conversational AI?

Context injection is a conversational AI design pattern where your application silently pushes information to the agent during a live session. Instead of the user explaining what they're doing ("I'm on the pricing page"), the application tells the agent directly. The agent absorbs the context without responding and uses it to inform its next reply. It's the difference between a context-blind chatbot and an agent that adapts to what the user is actually doing.


What's the difference between addContext() and sendUserMessage()?

sendUserMessage() simulates the user speaking and triggers an immediate persona response. addContext() silently appends information to the conversation history without triggering any response. Use sendUserMessage() when you want the agent to react immediately. Use addContext() when you want to feed background information (CRM data, navigation events, application state) that the agent should know about but shouldn't respond to directly.


Does addContext() work with Custom LLM?

Yes. In custom LLM mode, where you supply your own language model, the injected context appears in the conversation history that your LLM receives. You can build custom logic around it on your side. In turnkey mode, where Anam manages the full pipeline, the context is handled automatically in the managed conversation history. The addContext() API is the same in both cases.


What kind of data should I inject?

Anything that helps the agent respond more accurately to the user's current situation. Common examples: user profile and account data (plan tier, support history), page or route changes within your app, product selections or cart contents, form progress, and real-time application state like dashboard filters. Keep the content string descriptive and factual. The LLM interprets it as natural language, so write it the way you'd brief a human colleague.
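As an example of that "brief a human colleague" framing, a helper can turn structured state into a short factual sentence before injecting it. This is a sketch; the cart shape and function name are hypothetical:

```typescript
// Hypothetical cart item shape for illustration.
type CartItem = { name: string; qty: number };

// Render cart state as a plain-language briefing suitable for addContext().
function briefCart(items: CartItem[]): string {
  const lines = items.map((i) => `${i.qty}x ${i.name}`).join(', ');
  return `User's cart currently contains: ${lines || 'nothing'}.`;
}
```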


Does context injection increase token usage?

Yes. Each addContext() call appends to the conversation history, which grows the token count for subsequent LLM calls. Inject when something meaningful changes, not on every UI event. A good rule of thumb: if a human assistant sitting next to the user would notice the change, it's worth injecting. If they wouldn't, skip it.
