This quickstart shows you how to connect to an Anam persona and receive audio and video frames.

Prerequisites

  • An Anam API key (get one here)
  • A persona ID from Anam Lab
  • The anam package installed (pip install anam)

Connect and stream

import asyncio
from anam import AnamClient

async def main():
    client = AnamClient(
        api_key="your-api-key",
        persona_id="your-persona-id",
    )

    async with client.connect() as session:
        async def consume_video():
            async for frame in session.video_frames():
                img = frame.to_ndarray(format="rgb24")  # H x W x 3 RGB array
                print(f"Video: {frame.width}x{frame.height}")

        async def consume_audio():
            async for frame in session.audio_frames():
                samples = frame.to_ndarray()  # raw audio samples as an ndarray
                print(f"Audio: {samples.size} samples")

        await asyncio.gather(consume_video(), consume_audio())

asyncio.run(main())

This connects to the persona and prints frame metadata as it arrives. Replace the print calls with your own rendering logic.
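As a concrete starting point, here is a minimal, SDK-free sketch of the kind of per-frame processing you might plug in. It assumes the video frame decodes to raw RGB24 bytes (three bytes per pixel); that layout is an assumption for illustration, not part of the Anam API.

```python
# Hypothetical per-frame processing: mean brightness of an RGB24 buffer,
# a stand-in for the array returned by frame.to_ndarray(format="rgb24").
def mean_brightness(rgb_bytes: bytes) -> float:
    """Average channel value across the frame, in the range 0-255."""
    return sum(rgb_bytes) / len(rgb_bytes)

# A 2x2 test frame: two black pixels and two white pixels.
frame_bytes = bytes([0, 0, 0] * 2 + [255, 255, 255] * 2)
print(mean_brightness(frame_bytes))  # 127.5
```

In a real consumer you would call a function like this inside the `async for` loop instead of printing metadata.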

Never expose your API key in client-side code. The Python SDK is designed for server-side use. See Usage in Production for session token patterns.
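One common way to keep the key out of your source is to read it from the environment. A minimal sketch; the `ANAM_API_KEY` variable name is our own convention here, not something the SDK requires:

```python
import os

def load_api_key(var: str = "ANAM_API_KEY") -> str:
    """Read the API key from the environment, failing fast if it is unset."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} before starting the server")
    return key

# Then pass it to the client instead of a hardcoded string:
# client = AnamClient(api_key=load_api_key(), persona_id="your-persona-id")
```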

Send a message

You can send text to the persona during a session:
async with client.connect() as session:
    await session.talk("Hello, tell me about yourself.")
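To give a feel for driving a multi-turn exchange, here is a runnable sketch of a turn-based loop. `StubSession` is a stand-in for the real session object so the example is self-contained; `talk` taking a text string is the only Anam call assumed.

```python
import asyncio

class StubSession:
    """Stand-in for a connected Anam session (illustration only)."""
    def __init__(self):
        self.sent = []

    async def talk(self, text: str) -> None:
        self.sent.append(text)

async def run_turns(session, lines):
    # In a real app the lines might come from input() or a chat UI.
    for line in lines:
        await session.talk(line)

stub = StubSession()
asyncio.run(run_turns(stub, ["Hello", "Tell me about yourself."]))
print(stub.sent)
```

With a real session you would replace `StubSession` with the object yielded by `client.connect()`.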

Use the event-driven API

If you prefer callbacks over async iterators:
import asyncio

from anam import AnamClient, AnamEvent

client = AnamClient(
    api_key="your-api-key",
    persona_id="your-persona-id",
)

@client.on(AnamEvent.CONNECTION_ESTABLISHED)
async def on_connected():
    print("Connected to persona")

@client.on(AnamEvent.MESSAGE_STREAM_EVENT_RECEIVED)
async def on_message(event):
    print(f"{event.role}: {event.content}")

async def main():
    await client.connect()

asyncio.run(main())
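The decorator registration above is the standard async event-emitter idiom. This self-contained sketch (our own `Emitter` class, not the SDK's implementation) shows how that registration and dispatch fits together:

```python
import asyncio
from enum import Enum, auto

class Event(Enum):
    CONNECTED = auto()
    MESSAGE = auto()

class Emitter:
    """Tiny async event emitter illustrating the @client.on(...) pattern."""
    def __init__(self):
        self._handlers = {}

    def on(self, event):
        # Returns a decorator that registers the handler for this event.
        def register(fn):
            self._handlers.setdefault(event, []).append(fn)
            return fn
        return register

    async def emit(self, event, *args):
        # Awaits each registered handler in registration order.
        for handler in self._handlers.get(event, []):
            await handler(*args)

emitter = Emitter()
received = []

@emitter.on(Event.MESSAGE)
async def on_message(text):
    received.append(text)

asyncio.run(emitter.emit(Event.MESSAGE, "hello"))
print(received)  # ['hello']
```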

Next steps

  • GitHub Repository: source code, full API reference, and examples
  • Cookbook: Python BYO LLM: bring your own LLM with the Python SDK