This quickstart shows how to add an Anam avatar face to a LiveKit voice agent that uses the OpenAI Realtime API for its voice and LLM.

Prerequisites

- Node.js and pnpm installed
- A LiveKit Cloud project and its API credentials (cloud.livekit.io), plus the LiveKit CLI (lk)
- An OpenAI API key
- An Anam API key and an avatar ID (lab.anam.ai)

Set up the agent

Clone the LiveKit Node.js agent starter and install dependencies:
git clone https://github.com/livekit-examples/agent-starter-node.git
cd agent-starter-node
pnpm install
Download the required model files (VAD and turn detection):
pnpm run download-files
Install the Anam plugin:
pnpm add @livekit/agents-plugin-anam

Configure credentials

Create a .env.local file:
# LiveKit Cloud credentials (from cloud.livekit.io)
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your_api_key
LIVEKIT_API_SECRET=your_api_secret

# OpenAI (for voice + LLM)
OPENAI_API_KEY=your_openai_key

# Anam (for avatar face)
ANAM_API_KEY=your_anam_key
ANAM_AVATAR_ID=edf6fdcb-acab-44b8-b974-ded72665ee26
The avatar ID above is “Mia”, one of Anam’s stock avatars. Browse others in the Avatar Gallery or create your own at lab.anam.ai/avatars.
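With six variables in play, a missing or empty one is the most common cause of a silent startup failure. A minimal startup check is a sketch like the following; the `missingEnv` helper name is ours, not part of any LiveKit or Anam package:

```typescript
// Sketch: fail fast when a required environment variable is unset or empty.
const REQUIRED_VARS = [
  'LIVEKIT_URL',
  'LIVEKIT_API_KEY',
  'LIVEKIT_API_SECRET',
  'OPENAI_API_KEY',
  'ANAM_API_KEY',
  'ANAM_AVATAR_ID',
] as const;

export function missingEnv(
  env: Record<string, string | undefined> = process.env,
): string[] {
  // Report every unset or empty variable, not just the first one found.
  return REQUIRED_VARS.filter((name) => !env[name]);
}

const missing = missingEnv();
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(', ')}`);
}
```

Run this check before starting the worker so a bad `.env.local` surfaces immediately instead of as a cryptic connection error.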

Add the avatar to your agent

Replace the contents of src/agent.ts:
import { type JobContext, WorkerOptions, cli, defineAgent, voice } from '@livekit/agents';
import * as anam from '@livekit/agents-plugin-anam';
import * as openai from '@livekit/agents-plugin-openai';
import { BackgroundVoiceCancellation } from '@livekit/noise-cancellation-node';
import dotenv from 'dotenv';
import { fileURLToPath } from 'node:url';

dotenv.config({ path: '.env.local' });

class Assistant extends voice.Agent {
  constructor() {
    super({
      instructions: `You are a helpful voice AI assistant.
You eagerly assist users with their questions.
Your responses are concise, to the point, and without any complex formatting or punctuation including emojis, asterisks, or other symbols.
You are curious, friendly, and have a sense of humor.`,
    });
  }
}

export default defineAgent({
  entry: async (ctx: JobContext) => {
    await ctx.connect();

    // Start the voice session, with the OpenAI Realtime model handling speech and LLM
    const session = new voice.AgentSession({
      llm: new openai.realtime.RealtimeModel({ voice: 'alloy' }),
    });

    await session.start({
      agent: new Assistant(),
      room: ctx.room,
      inputOptions: {
        noiseCancellation: BackgroundVoiceCancellation(),
      },
    });

    // Start the Anam avatar session; without an avatar ID the agent stays voice-only
    const avatarId = process.env.ANAM_AVATAR_ID;
    if (!avatarId) {
      console.warn('ANAM_AVATAR_ID is not set. Avatar will not start.');
      return;
    }

    const avatarSession = new anam.AvatarSession({
      personaConfig: {
        name: 'Mia',
        avatarId,
      },
    });

    await avatarSession.start(session, ctx.room);
    console.log('Agent and avatar session started');
  },
});

cli.runApp(new WorkerOptions({ agent: fileURLToPath(import.meta.url) }));
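The avatar-ID guard in the entry function can also be factored into a pure helper, which makes the voice-only fallback easy to unit test. A sketch; `resolvePersonaConfig` and the optional `ANAM_PERSONA_NAME` override are our inventions, not part of the Anam plugin:

```typescript
// Sketch: derive the Anam persona config from the environment.
// Returns null when no avatar is configured, signalling a voice-only fallback.
interface PersonaConfig {
  name: string;
  avatarId: string;
}

export function resolvePersonaConfig(
  env: Record<string, string | undefined> = process.env,
): PersonaConfig | null {
  const avatarId = env.ANAM_AVATAR_ID;
  if (!avatarId) return null;
  // ANAM_PERSONA_NAME is a hypothetical override; defaults to the stock "Mia".
  return { name: env.ANAM_PERSONA_NAME ?? 'Mia', avatarId };
}
```

In the agent, you would pass the non-null result straight to `new anam.AvatarSession({ personaConfig })` and skip avatar startup on `null`.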

Test locally

pnpm run dev
The agent registers with LiveKit Cloud and waits to be dispatched to a room. To talk to it, you need a frontend that creates a room and connects to it.

Set up the frontend

In a new terminal, create the React frontend:
lk app create --template agent-starter-react
cd agent-starter-react
pnpm install
Create a .env.local with your LiveKit credentials:
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your_api_key
LIVEKIT_API_SECRET=your_api_secret
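The starter's server route uses these credentials to mint the access token each browser participant presents when joining a room. In real code you should use `AccessToken` from livekit-server-sdk, but the token itself is just an HS256-signed JWT, which the following stdlib-only sketch illustrates (the `mintLiveKitToken` name is ours):

```typescript
import { createHmac } from 'node:crypto';

// Sketch: build a LiveKit join token by hand to show its shape.
// Production code should use livekit-server-sdk's AccessToken instead.
const b64url = (buf: Buffer): string =>
  buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');

export function mintLiveKitToken(
  apiKey: string,
  apiSecret: string,
  identity: string,
  room: string,
): string {
  const header = { alg: 'HS256', typ: 'JWT' };
  const now = Math.floor(Date.now() / 1000);
  const payload = {
    iss: apiKey, // LiveKit API key identifies the issuer
    sub: identity, // participant identity shown in the room
    nbf: now,
    exp: now + 3600, // token valid for one hour
    video: { room, roomJoin: true }, // grant: may join this room
  };
  const signingInput =
    b64url(Buffer.from(JSON.stringify(header))) +
    '.' +
    b64url(Buffer.from(JSON.stringify(payload)));
  const signature = b64url(
    createHmac('sha256', apiSecret).update(signingInput).digest(),
  );
  return `${signingInput}.${signature}`;
}
```

The `LIVEKIT_API_SECRET` never leaves the server; only the short-lived signed token is sent to the browser.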
Start the dev server:
pnpm dev
Open http://localhost:3000 and click Connect; the avatar's video appears as the agent speaks.

Deploy to LiveKit Cloud

lk agent deploy --secrets-file=.env.local
This uploads your agent code and environment variables. The agent will now automatically join any rooms created in your project.

Next steps