
AutoGen Integration

Equip Microsoft AutoGen agents with vault-backed credentials — every API key fetched at runtime, every access logged, every agent token-scoped to only what it needs.

Why AutoGen agents need a vault

AutoGen agents are designed to work autonomously for extended periods, calling external APIs, searching the web, and publishing results. Each of these operations requires credentials.

Without a vault, credentials end up in environment variables shared across all agents, in system prompts (where they appear in LLM context windows), or hardcoded in tool definitions. Any of these approaches means:

🔓 No isolation: all agents see all credentials — a compromised agent leaks everything.

🌫️ No audit trail: you can't tell which agent used which credential or when.

♻️ No rotation: rotating a key requires redeploying every agent that uses it.

Agent Secret Store solves all three: scoped tokens isolate each agent, the audit trail records every credential access, and key rotation is transparent to agents.

Installation

Shell
pip install agentsecretstore pyautogen

Setup

  1. Store your credentials in the vault

    Add your secrets at paths like production/openai/api-key, production/tavily/api-key, etc. Use the dashboard or the ass import .env CLI command to migrate from an existing .env file.

  2. Set ASS_AGENT_KEY in your environment

    Your orchestrator process needs the master agent key: export ASS_AGENT_KEY=ast_agent_xxx. AutoGen agents receive scoped tokens derived from this key — they never see the master key.

  3. Create a VaultTool factory

    Wrap the SDK in an AutoGen-compatible function using the make_vault_tool() factory pattern shown below. The factory binds a scoped token so each agent call uses only the permissions it was granted.

  4. Issue scoped tokens per agent or session

    Before spawning each agent, issue a scoped token with the minimum necessary scope. Pass the token value to make_vault_tool(). Tokens expire automatically — no cleanup needed.

VaultTool factory

AutoGen uses function annotations for tool descriptions. The factory pattern lets you create a bound tool function that carries the scoped token in its closure:

vault_tool.py
from typing import Annotated
from agentsecretstore import AgentVault

def make_vault_tool(vault_token: str):
    """
    Factory that returns a vault retrieval function bound to a scoped token.
    The token — not the master key — is what the agent uses.
    """

    async def get_secret(
        path: Annotated[str, "Secret path in the vault, e.g. 'production/openai/api-key'"],
    ) -> str:
        """
        Retrieve a credential from Agent Secret Store.
        Call this before any operation that requires an API key or credential.
        Returns the secret value as a plain string.
        """
        async with AgentVault() as vault:
            secret = await vault.get_secret(path, token=vault_token)
        return secret.value

    return get_secret
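The key property of the factory is that the scoped token lives in the closure: each call to the factory produces an independently bound function, and agents never handle the token as an argument. A minimal sketch of that binding, using an in-memory dict as a stand-in for the real AgentVault lookup (the dict and token values below are purely illustrative):

```python
def make_tool(vault_token: str):
    # Same closure pattern as make_vault_tool, with a dict standing in
    # for the real AgentVault call.
    fake_vault = {
        "tok-researcher": {"production/search/tavily-key": "tvly-abc"},
        "tok-writer": {"production/publishing/wordpress-key": "wp-xyz"},
    }

    def get_secret(path: str) -> str:
        # The bound token rides along on every call; agents can't swap
        # in someone else's token because it is never a parameter.
        return fake_vault[vault_token][path]

    return get_secret

researcher_tool = make_tool("tok-researcher")
writer_tool = make_tool("tok-writer")
print(researcher_tool("production/search/tavily-key"))     # tvly-abc
print(writer_tool("production/publishing/wordpress-key"))  # wp-xyz
```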

Full agent example

Complete example with token issuance, LLM config from vault, and tool registration:

autogen_agent.py
import asyncio
import autogen
from agentsecretstore import AgentVault
from vault_tool import make_vault_tool

async def run_research_agent():
    """AutoGen agent that fetches all credentials from the vault at runtime."""

    # Step 1: Orchestrator issues a scoped token for this agent run
    async with AgentVault() as vault:
        token = await vault.request_token(
            scope="secrets:read:production/*",
            ttl_seconds=1800,
            description="AutoGen research agent session",
        )

        # Also fetch the LLM key directly for the assistant config
        openai_key = await vault.get_secret("production/openai/api-key")

    # Step 2: Configure the LLM using the vault-fetched key
    llm_config = {
        "config_list": [
            {
                "model": "gpt-4o",
                "api_key": openai_key.value,  # from vault, not env var
            }
        ],
        "temperature": 0.1,
        "timeout": 120,
    }

    # Step 3: Create the vault retrieval function bound to the scoped token
    get_secret = make_vault_tool(vault_token=token.value)

    # Step 4: Register the vault tool with AutoGen
    user_proxy = autogen.UserProxyAgent(
        name="user_proxy",
        human_input_mode="NEVER",
        max_consecutive_auto_reply=10,
        is_termination_msg=lambda x: (x.get("content") or "").rstrip().endswith("TERMINATE"),
        code_execution_config={"use_docker": False},
    )

    assistant = autogen.AssistantAgent(
        name="research_assistant",
        llm_config=llm_config,
        system_message=(
            "You are an AI research assistant with access to a secure credential vault. "
            "When you need to call any external API, always use the get_secret tool first "
            "to retrieve the required credential. Never ask the user for API keys. "
            "Reply TERMINATE when the task is complete."
        ),
    )

    # Register the vault tool for both calling and execution
    autogen.register_function(
        get_secret,
        caller=assistant,
        executor=user_proxy,
        name="get_secret",
        description="Retrieve an API key or credential from the secure vault by path",
    )

    # Step 5: Run the agent
    await user_proxy.a_initiate_chat(
        assistant,
        message=(
            "Research the latest developments in multi-agent AI frameworks. "
            "Use the Tavily search API (key at 'production/tavily/api-key') "
            "to find recent papers and summarize the top 3 findings."
        ),
    )

asyncio.run(run_research_agent())

LLM key from vault

Notice that even the openai_key for llm_config is fetched from the vault — not from an environment variable. Your AutoGen agent has zero hardcoded credentials anywhere.

Multi-agent pattern

In a multi-agent GroupChat, each agent should get its own scoped token. The researcher can only reach search credentials; the writer can only reach publishing credentials — a compromised agent cannot pivot to steal another's secrets:

multi_agent.py
import asyncio
import autogen
from agentsecretstore import AgentVault
from vault_tool import make_vault_tool

async def run_multi_agent_pipeline():
    """
    Multi-agent AutoGen pipeline where each agent has its own scoped token.
    The researcher can only read search credentials.
    The writer can only read publishing credentials.
    Neither can access the other's secrets.
    """

    async with AgentVault() as vault:
        # Fetch shared LLM key
        openai_key = await vault.get_secret("production/openai/api-key")

        # Issue separate scoped tokens per agent — least privilege
        researcher_token = await vault.request_token(
            scope="secrets:read:production/search/*",
            ttl_seconds=3600,
            description="AutoGen researcher agent",
        )
        writer_token = await vault.request_token(
            scope="secrets:read:production/publishing/*",
            ttl_seconds=3600,
            description="AutoGen writer agent",
        )
        critic_token = await vault.request_token(
            scope="secrets:read:production/quality/*",
            ttl_seconds=3600,
            description="AutoGen critic agent",
        )

    llm_config = {
        "config_list": [{"model": "gpt-4o", "api_key": openai_key.value}],
    }

    # Each agent gets its own vault tool bound to its scoped token
    researcher_vault = make_vault_tool(vault_token=researcher_token.value)
    writer_vault = make_vault_tool(vault_token=writer_token.value)
    critic_vault = make_vault_tool(vault_token=critic_token.value)

    # Researcher: can only access production/search/* secrets
    researcher = autogen.AssistantAgent(
        name="researcher",
        llm_config=llm_config,
        system_message=(
            "You research topics using web search APIs. "
            "Use get_secret to retrieve 'production/search/tavily-key' before searching."
        ),
    )

    # Writer: can only access production/publishing/* secrets
    writer = autogen.AssistantAgent(
        name="writer",
        llm_config=llm_config,
        system_message=(
            "You write and publish content. "
            "Use get_secret to retrieve 'production/publishing/wordpress-key' before posting."
        ),
    )

    # Critic: can only access quality service credentials
    critic = autogen.AssistantAgent(
        name="critic",
        llm_config=llm_config,
        system_message=(
            "You review content quality. "
            "Use get_secret for 'production/quality/grammarly-key' to check writing."
        ),
    )

    user_proxy = autogen.UserProxyAgent(
        name="user_proxy",
        human_input_mode="NEVER",
        max_consecutive_auto_reply=20,
        code_execution_config={"use_docker": False},
    )

    # Register scoped vault tools per agent — researcher can't use writer's tool
    autogen.register_function(
        researcher_vault, caller=researcher, executor=user_proxy,
        name="get_secret", description="Retrieve a search API credential",
    )
    autogen.register_function(
        writer_vault, caller=writer, executor=user_proxy,
        name="get_secret", description="Retrieve a publishing credential",
    )
    autogen.register_function(
        critic_vault, caller=critic, executor=user_proxy,
        name="get_secret", description="Retrieve a quality service credential",
    )

    # Run the group chat
    groupchat = autogen.GroupChat(
        agents=[user_proxy, researcher, writer, critic],
        messages=[],
        max_round=12,
    )
    manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

    await user_proxy.a_initiate_chat(
        manager,
        message="Research and write a blog post about AutoGen multi-agent patterns.",
    )

asyncio.run(run_multi_agent_pipeline())

Scope isolation is server-enforced

Even if the researcher agent were to call get_secret('production/publishing/wordpress-key'), the vault would return HTTP 403. Scope restrictions are not trust-based — they are cryptographically enforced at the server.
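The decision the server makes can be approximated with a glob match over the scope's path pattern. The sketch below is illustrative client-side pseudologic under the assumption that scope patterns like 'secrets:read:production/search/*' are glob-style — the actual check happens inside the vault service, not in your code:

```python
from fnmatch import fnmatch

def authorize_read(token_scope: str, path: str) -> int:
    """Illustrative server-side check: map a secret read to an HTTP status."""
    parts = token_scope.split(":", 2)
    if len(parts) != 3 or parts[0] != "secrets" or parts[1] != "read":
        return 403
    # fnmatch's '*' spans '/' too, so 'production/search/*' covers every
    # secret under that prefix and nothing outside it.
    return 200 if fnmatch(path, parts[2]) else 403

researcher_scope = "secrets:read:production/search/*"
print(authorize_read(researcher_scope, "production/search/tavily-key"))        # 200
print(authorize_read(researcher_scope, "production/publishing/wordpress-key")) # 403
```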

Audit-enriched tool

For compliance-sensitive deployments, pass the agent name and a reason to the vault call. This metadata appears in the audit log, making it easy to reconstruct exactly which agent fetched which credential and why:

audited_vault_tool.py
from typing import Annotated
from agentsecretstore import AgentVault

# Advanced pattern: tool that attaches audit metadata to every retrieval
def make_audited_vault_tool(vault_token: str, agent_name: str):
    """Enhanced vault tool that logs the requesting agent name."""

    async def get_secret(
        path: Annotated[str, "Vault secret path"],
        reason: Annotated[str, "Why this secret is needed (recorded in audit)"] = "",
    ) -> str:
        """Retrieve a credential. Provide a reason for audit clarity."""
        async with AgentVault() as vault:
            secret = await vault.get_secret(
                path,
                token=vault_token,
                # Extra metadata recorded in the audit log
                metadata={"agent_name": agent_name, "reason": reason},
            )
        return secret.value

    return get_secret
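On the audit side, each such call could surface a record along these lines. The schema below is purely illustrative — the real audit-log format is defined by the service; only the metadata keys (agent_name, reason) come from the tool above:

```python
from datetime import datetime, timezone

def audit_record(agent_name: str, path: str, reason: str) -> dict:
    # Illustrative record shape only; Agent Secret Store defines
    # its own audit-log schema.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": "secrets:read",
        "path": path,
        "metadata": {"agent_name": agent_name, "reason": reason},
    }

record = audit_record("researcher", "production/search/tavily-key",
                      "fetching search key before querying Tavily")
print(record["path"])                    # production/search/tavily-key
print(record["metadata"]["agent_name"])  # researcher
```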

LangChain Integration

Vault-backed credentials for LangChain agents and tools.

CrewAI Integration

Use the vault with CrewAI crews and role-based agents.

Scoped Tokens

Deep dive into scope format, TTLs, and single-use tokens.

Python SDK Reference

Complete SDK reference for all vault operations.