
LangGraph Supervisor Multi-Agent Cheatsheet

LangGraph 1.x | Supervisor Multi-Agent Pattern Cheatsheet

Core Concepts

The Supervisor pattern is a classic multi-agent architecture: a single Supervisor agent manages multiple Worker agents and centrally dispatches tasks among them.


                   USER REQUEST
                        |
                        v

                  Supervisor
                        |
                        v (routing)
                        |
         +--------------+--------------+
         |              |              |
         v              v              v
     Worker1       Worker2       Worker3
         |              |              |
         +--------------+--------------+
                        | (report back)
                        v
                   Supervisor

Core flow:

  • The Supervisor analyzes the task and decides which Worker should handle it
  • Workers execute their tasks and report the results back to the Supervisor
  • Based on the results, the Supervisor decides the next step or finishes (END)

Comparison of the Three Implementation Approaches

Approach | Recommended scenario
langgraph-supervisor | Rapid prototyping
Manual implementation (handoff tools) | Production
Task Delegation + Send() | Complex scenarios

Approach 1: Use the Official Library (Simplest)

Installation

bash
pip install langgraph langgraph-supervisor langchain-openai

python
from langgraph_supervisor import create_supervisor
from langgraph.prebuilt import create_react_agent
from langchain.chat_models import init_chat_model

# 1. Create the worker agents (web_search_tool, add, multiply, divide are assumed to be defined elsewhere)
research_agent = create_react_agent(
    model="openai:gpt-4o",
    tools=[web_search_tool],
    prompt="You are a research agent...",
    name="research_agent",
)

math_agent = create_react_agent(
    model="openai:gpt-4o",
    tools=[add, multiply, divide],
    prompt="You are a math agent...",
    name="math_agent",
)

# 2. Create the supervisor in a single call
supervisor = create_supervisor(
    model=init_chat_model("openai:gpt-4o"),
    agents=[research_agent, math_agent],
    prompt=(
        "You are a supervisor managing two agents:\n"
        "- research_agent: for research tasks\n"
        "- math_agent: for math tasks\n"
        "Assign work to one agent at a time."
    ),
    add_handoff_back_messages=True,  # workers automatically report back when done
    output_mode="full_history",       # keep the full conversation history
).compile()

# 3. Run
result = supervisor.invoke({"messages": [{"role": "user", "content": "..."}]})

Approach 2: Manual Implementation (Recommended for Production)

Imports

python
from typing import Annotated
from langchain_core.tools import tool, InjectedToolCallId
from langgraph.prebuilt import InjectedState, create_react_agent
from langgraph.graph import StateGraph, START, END, MessagesState
from langgraph.types import Command

Step 1: Create a Handoff Tool

A handoff tool is how the Supervisor assigns a task to a specific Worker.

python
def create_handoff_tool(*, agent_name: str, description: str | None = None):
    """创建一个用于将任务分配给指定 Agent 的工具"""
    name = f"transfer_to_{agent_name}"
    description = description or f"Transfer to {agent_name} for help."

    @tool(name, description=description)
    def handoff_tool(
        state: Annotated[MessagesState, InjectedState],
        tool_call_id: Annotated[str, InjectedToolCallId],
    ) -> Command:
        tool_message = {
            "role": "tool",
            "content": f"Successfully transferred to {agent_name}",
            "name": name,
            "tool_call_id": tool_call_id,
        }
        return Command(
            goto=agent_name,                                    # target node to jump to
            update={**state, "messages": state["messages"] + [tool_message]},
            graph=Command.PARENT,                               # navigate at the parent-graph level
        )

    return handoff_tool

Step 2: Create the Supervisor Agent

python
# Create the handoff tools
assign_to_research = create_handoff_tool(
    agent_name="research_agent",
    description="Assign research tasks to this agent."
)
assign_to_math = create_handoff_tool(
    agent_name="math_agent",
    description="Assign math tasks to this agent."
)

# Create the supervisor
supervisor_agent = create_react_agent(
    model="openai:gpt-4o",
    tools=[assign_to_research, assign_to_math],
    prompt=(
        "You are a supervisor managing agents.\n"
        "Assign work to one agent at a time.\n"
        "Do not do any work yourself."
    ),
    name="supervisor",
)

Step 3: Build the Multi-Agent Graph

python
supervisor_graph = (
    StateGraph(MessagesState)
    # Key point: declare the possible jump targets via the destinations parameter
    .add_node(supervisor_agent, destinations=("research_agent", "math_agent", END))
    .add_node(research_agent)
    .add_node(math_agent)
    # Edge definitions
    .add_edge(START, "supervisor")
    # Workers always return to the supervisor when done
    .add_edge("research_agent", "supervisor")
    .add_edge("math_agent", "supervisor")
    .compile()
)
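
The compiled graph runs like any other LangGraph graph. A minimal usage sketch (the user question is illustrative):

python
result = supervisor_graph.invoke(
    {"messages": [{"role": "user", "content": "What is 42 * 17?"}]}
)
# The last message holds the final answer once the supervisor stops routing
print(result["messages"][-1].content)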

Approach 3: Task Delegation (Advanced)

When the Supervisor should not pass the full conversation history to a Worker, this pattern is more efficient.

Implementation with Send()

python
from langgraph.types import Send

def create_task_handoff_tool(*, agent_name: str, description: str):
    name = f"transfer_to_{agent_name}"

    @tool(name, description=description)
    def handoff_tool(
        # Task description generated by the supervisor LLM
        task_description: Annotated[
            str,
            "Description of what the agent should do, with all relevant context."
        ],
        state: Annotated[MessagesState, InjectedState],
    ) -> Command:
        # Pass only the task description, not the full history
        task_message = {"role": "user", "content": task_description}
        agent_input = {**state, "messages": [task_message]}

        return Command(
            goto=[Send(agent_name, agent_input)],  # use Send to deliver a fresh input
            graph=Command.PARENT,
        )

    return handoff_tool

Advantages:

  • Workers receive only a concise task description instead of the full conversation history
  • The Supervisor controls exactly what information each delegated task carries
  • Token usage is reduced
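
A minimal wiring sketch, assuming the research_agent / math_agent workers from Approach 2 and the create_task_handoff_tool factory above (the tool descriptions and supervisor prompt are illustrative):

python
# Task-delegation handoff tools: each one passes only a task description to its worker
delegate_research = create_task_handoff_tool(
    agent_name="research_agent",
    description="Delegate a self-contained research task to the research agent.",
)
delegate_math = create_task_handoff_tool(
    agent_name="math_agent",
    description="Delegate a self-contained calculation task to the math agent.",
)

supervisor_agent = create_react_agent(
    model="openai:gpt-4o",
    tools=[delegate_research, delegate_math],
    prompt=(
        "You are a supervisor. For each request, write a self-contained task "
        "description and delegate it to exactly one agent at a time."
    ),
    name="supervisor",
)
# The graph wiring is unchanged from Approach 2: workers edge back to the supervisor.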

Complex Scenarios: Nested Supervisor Teams

Suited to complex business scenarios that need multiple levels of team collaboration.


             Top Supervisor
                   |
                   v
        +----------+----------+
        |                     |
        v                     v
 Research Team         Writing Team
  (sub-graph)           (sub-graph)

Step 1: Create a Sub-Supervisor Node

python
from typing import Literal
from typing_extensions import TypedDict

def make_supervisor_node(llm, members: list[str]):
    """创建一个 supervisor 节点"""
    options = ["FINISH"] + members

    class Router(TypedDict):
        """选择下一个 worker 或结束 (FINISH)"""
        next: Literal[*options]

    system_prompt = (
        f"You are a supervisor managing workers: {members}.\n"
        "Respond with the worker to act next. When finished, respond with FINISH."
    )

    def supervisor_node(state) -> Command[Literal[*members, "__end__"]]:
        messages = [{"role": "system", "content": system_prompt}] + state["messages"]
        response = llm.with_structured_output(Router).invoke(messages)
        goto = response["next"]
        if goto == "FINISH":
            goto = END
        return Command(goto=goto, update={"next": goto})

    return supervisor_node

Step 2: Build a Team Subgraph

python
from langgraph.prebuilt import create_react_agent
from langgraph.graph import StateGraph, START, MessagesState
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

# State shared by the team subgraphs: messages plus the supervisor's routing decision
class State(MessagesState):
    next: str

llm = ChatOpenAI(model="gpt-4o")

# Create the worker agents (tavily_tool and scrape_tool are assumed to be defined elsewhere)
search_agent = create_react_agent(llm, tools=[tavily_tool])
scraper_agent = create_react_agent(llm, tools=[scrape_tool])

# Wrap each worker node so it returns to the supervisor when done
def search_node(state) -> Command[Literal["supervisor"]]:
    result = search_agent.invoke(state)
    return Command(
        update={"messages": [HumanMessage(content=result["messages"][-1].content, name="search")]},
        goto="supervisor",
    )

# Build the research team subgraph
research_supervisor = make_supervisor_node(llm, ["search", "scraper"])

research_graph = (
    StateGraph(State)
    .add_node("supervisor", research_supervisor)
    .add_node("search", search_node)
    .add_node("scraper", scraper_node)
    .add_edge(START, "supervisor")
    .compile()
)

Step 3: Top-Level Supervisor

python
def call_research_team(state) -> Command[Literal["supervisor"]]:
    response = research_graph.invoke({"messages": state["messages"][-1]})
    return Command(
        update={"messages": [HumanMessage(content=response["messages"][-1].content, name="research_team")]},
        goto="supervisor",
    )
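
# call_writing_team (used below) is assumed to mirror call_research_team; this is a
# sketch only - writing_graph would be a second team subgraph built like research_graph.
def call_writing_team(state) -> Command[Literal["supervisor"]]:
    response = writing_graph.invoke({"messages": state["messages"][-1]})
    return Command(
        update={"messages": [HumanMessage(content=response["messages"][-1].content, name="writing_team")]},
        goto="supervisor",
    )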

# Top-level graph
top_supervisor = make_supervisor_node(llm, ["research_team", "writing_team"])

super_graph = (
    StateGraph(State)
    .add_node("supervisor", top_supervisor)
    .add_node("research_team", call_research_team)
    .add_node("writing_team", call_writing_team)
    .add_edge(START, "supervisor")
    .compile()
)

Core API Quick Reference

The Command Object

python
from langgraph.types import Command

Command(
    goto="node_name",           # 跳转到指定节点
    update={"key": "value"},    # 更新 state
    graph=Command.PARENT,       # 在父图级别跳转(用于子图)
)

The Send Function

python
from langgraph.types import Send

Command(
    goto=[Send("agent_name", {"messages": [...]})]  # 发送自定义输入
)

Conditional-Edge Routing

python
builder.add_conditional_edges(
    "supervisor",
    routing_func,               # returns the name of the next node
    {
        "worker1": "worker1_node",
        "worker2": "worker2_node",
        "__end__": END,
    }
)

Structured-Output Routing

python
from typing import Literal
from typing_extensions import TypedDict

class Router(TypedDict):
    next: Literal["agent1", "agent2", "FINISH"]

response = llm.with_structured_output(Router).invoke(messages)

Best Practices

1. Supervisor Prompt Design

python
SUPERVISOR_PROMPT = """You are a supervisor managing these agents:
- agent_a: handles X tasks
- agent_b: handles Y tasks

RULES:
1. Assign work to ONE agent at a time
2. Do NOT do any work yourself
3. After receiving results, decide next action or finish
"""

2. Worker Prompt Design

python
WORKER_PROMPT = """You are a specialized agent for {domain}.

INSTRUCTIONS:
- Only handle {domain}-related tasks
- After completing your task, respond with results directly
- Do NOT include unnecessary text
"""

3. Keep Worker Output Lean

Have each Worker return only its final message, to avoid duplicating the history.

python
def call_worker(state):
    response = worker_agent.invoke(state)
    # Return only the final message instead of repeating the whole history
    return {"messages": response["messages"][-1]}

4. Handle Termination Correctly

python
# Option A: use a type marker in the state
if state.get("type") == "__end__":
    return Command(goto=END)

# Option B: use structured output
if response["next"] == "FINISH":
    goto = END

Complete Example

python
from typing import Annotated, Literal
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END, MessagesState
from langgraph.prebuilt import create_react_agent
from langgraph.types import Command
from langchain_core.tools import tool, InjectedToolCallId
from langgraph.prebuilt import InjectedState

llm = ChatOpenAI(model="gpt-4o")

# ========== 1. Define tools ==========
@tool
def search(query: str) -> str:
    """Search the web."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> str:
    """Calculate math expression."""
    return str(eval(expression))  # demo only: eval is unsafe for untrusted input

# ========== 2. Create worker agents ==========
researcher = create_react_agent(llm, [search], prompt="You are a researcher.", name="researcher")
calculator = create_react_agent(llm, [calculate], prompt="You are a calculator.", name="calculator")

# ========== 3. Create handoff tools ==========
def make_handoff(agent_name: str):
    @tool(f"transfer_to_{agent_name}", description=f"Assign the current task to {agent_name}.")
    def handoff(
        state: Annotated[MessagesState, InjectedState],
        tool_call_id: Annotated[str, InjectedToolCallId],
    ) -> Command:
        return Command(
            goto=agent_name,
            update={**state, "messages": state["messages"] + [{
                "role": "tool", "content": f"Transferred to {agent_name}",
                "name": f"transfer_to_{agent_name}", "tool_call_id": tool_call_id
            }]},
            graph=Command.PARENT,
        )
    return handoff

# ========== 4. Create the supervisor ==========
supervisor = create_react_agent(
    llm,
    tools=[make_handoff("researcher"), make_handoff("calculator")],
    prompt="You are a supervisor. Assign tasks to agents. Don't work yourself.",
    name="supervisor",
)

# ========== 5. Build the graph ==========
graph = (
    StateGraph(MessagesState)
    .add_node(supervisor, destinations=("researcher", "calculator", END))
    .add_node(researcher)
    .add_node(calculator)
    .add_edge(START, "supervisor")
    .add_edge("researcher", "supervisor")
    .add_edge("calculator", "supervisor")
    .compile()
)

# ========== 6. Run ==========
for chunk in graph.stream({"messages": [{"role": "user", "content": "Search for GDP and calculate 100/4"}]}):
    print(chunk)

FAQ

Q: How does the system know when a Worker has finished its task?

A: When a Worker finishes, its result is automatically returned to the Supervisor, which uses it to decide the next step.

Q: How do I avoid infinite loops?

A: Set a recursion_limit:

python
graph.invoke(input, {"recursion_limit": 50})

Q: Supervisor vs. Network architecture: which should I choose?

  • Supervisor: centralized control; best when tasks have a clear division of labor
  • Network: agents communicate with each other directly; best for complex, free-form collaboration
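
For contrast, a minimal network-style sketch in which each agent routes directly to a peer (or ends) via Command; the agent names and the hard-coded routing are illustrative only:

python
from typing import Literal
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.types import Command

model = ChatOpenAI(model="gpt-4o")

def agent_a(state: MessagesState) -> Command[Literal["agent_b", "__end__"]]:
    # In a network, each agent picks its own successor (or finishes) instead of
    # reporting back to a supervisor; a real agent would decide this dynamically.
    result = model.invoke(state["messages"])
    return Command(goto="agent_b", update={"messages": [result]})

def agent_b(state: MessagesState) -> Command[Literal["agent_a", "__end__"]]:
    result = model.invoke(state["messages"])
    return Command(goto=END, update={"messages": [result]})

network = (
    StateGraph(MessagesState)
    .add_node(agent_a)
    .add_node(agent_b)
    .add_edge(START, "agent_a")
    .compile()
)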

When to Use a Multi-Agent System

Warning Signs in a Single-Agent Setup

Consider moving to multiple agents when a single agent shows these problems:

Problem | Description
Too many tools | The agent gets confused about which tool to use and when
Context too large | The context window carries too much tool information
More errors | Responsibilities are too broad, producing suboptimal or incorrect results

Advantages of Multi-Agent Systems

  • Scalability: adding new agents does not overload any single agent
  • Specialization: each agent focuses on a specific task and handles it better
  • Control: explicit control over how agents communicate
  • Fault Tolerance: a single failing agent does not take down the whole system

Two Supervisor Variants

Variant 1: Basic Supervisor (Command Routing)

python
def supervisor(state) -> Command[Literal["agent_1", "agent_2", END]]:
    response = model.invoke(...)
    return Command(goto=response["next_agent"])

def agent_1(state) -> Command[Literal["supervisor"]]:
    response = model.invoke(...)
    return Command(goto="supervisor", update={"messages": [response]})

Key point: the supervisor uses Command to route explicitly to the next agent.

变体 2: Tool-Calling Supervisor

python
from langgraph.prebuilt import InjectedState, create_react_agent

def agent_1(state: Annotated[dict, InjectedState]):
    response = model.invoke(...)
    return response.content  # return a string; it is automatically wrapped in a ToolMessage

tools = [agent_1, agent_2]
supervisor = create_react_agent(model, tools)  # ReAct-style supervisor

Key point: agents are exposed as tools, leveraging the LLM's tool-calling ability.


How Agents Communicate

Graph State vs. Tool Calls

Mechanism | Description
Graph State | An agent passes the full state directly to the next agent
Tool Calls | The LLM's tool calls decide what information gets passed along

Shared Message List Strategies

Strategy | Description | Best for
Share the full history | Agents see each other's full reasoning process | Few agents that need complete context
Share only final results | Each agent keeps a private scratchpad | Many agents / high complexity

The Handoff Pattern

Handoffs are the core interaction pattern between agents:

  • destination: the target agent (the node to jump to)
  • payload: the information passed to the target agent (the state update)

Handoffs as Tools

python
def transfer_to_bob(state):
    """Transfer to bob."""
    return Command(
        goto="bob",
        update={"my_state_key": "my_state_value"},
        graph=Command.PARENT,  # jump to another node of the parent graph from inside a subgraph
    )

Handoffs Between Subgraphs

python
def some_node_inside_alice(state):
    return Command(
        goto="bob",
        update={"my_state_key": "my_state_value"},
        graph=Command.PARENT,  # key: specify the parent graph
    )



Supervisor vs. Swarm: Key Differences

Feature | Supervisor | Swarm
Control | Centralized: the supervisor manages all agent interactions | Decentralized: agents interact with each other directly
User interaction | Mostly goes through the supervisor | Users can interact with any agent directly
Execution flow | Always returns to the supervisor | Each agent decides its own next handoff
Entry point | The supervisor | The last active agent, or a default agent
Communication | Always relayed through the supervisor | Agents communicate with each other directly
Best for | Workflows with a core process to follow | Diverse requests that need collaboration across specialists
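
For comparison, a minimal Swarm sketch, assuming the companion langgraph-swarm package (pip install langgraph-swarm) and its create_swarm / create_handoff_tool helpers; agent names and prompts are illustrative:

python
from langgraph.prebuilt import create_react_agent
from langgraph_swarm import create_swarm, create_handoff_tool

# Each agent carries a handoff tool pointing at its peer, so control moves
# directly between agents instead of returning to a supervisor.
alice = create_react_agent(
    "openai:gpt-4o",
    tools=[create_handoff_tool(agent_name="bob")],
    prompt="You are Alice, a math expert.",
    name="alice",
)
bob = create_react_agent(
    "openai:gpt-4o",
    tools=[create_handoff_tool(agent_name="alice")],
    prompt="You are Bob, a research expert.",
    name="bob",
)

# default_active_agent is the entry point for the first user turn
swarm = create_swarm([alice, bob], default_active_agent="alice").compile()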

Message History Management

output_mode Options

python
# Keep the full message history (default)
supervisor = create_supervisor(
    agents=[agent1, agent2],
    output_mode="full_history"  # include every intermediate step
)

# Keep only the final response
supervisor = create_supervisor(
    agents=[agent1, agent2],
    output_mode="last_message"  # keep only the last message
)

Which to choose:

  • full_history: when you need to debug or trace the full execution path
  • last_message: in production, when only the final result matters and you want to save tokens

Message Forwarding

Forward a Worker's response straight to the output (saving the Supervisor's token usage):

python
from langgraph_supervisor.handoff import create_forward_message_tool

# Create the forwarding tool
forwarding_tool = create_forward_message_tool("supervisor")

workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    tools=[forwarding_tool]  # lets the supervisor forward a worker's response verbatim
)

Why it helps: it keeps the Supervisor from re-explaining the Worker's answer, cutting token usage and potential information distortion.


Execution Modes

Synchronous vs. Asynchronous Invocation

python
# Synchronous call - waits for completion
response = agent.invoke({"messages": [...]})

# Asynchronous call - non-blocking
response = await agent.ainvoke({"messages": [...]})

# Synchronous streaming
for chunk in agent.stream({"messages": [...]}, stream_mode="updates"):
    print(chunk)

# Asynchronous streaming
async for chunk in agent.astream({"messages": [...]}, stream_mode="updates"):
    print(chunk)

Limiting the Maximum Number of Iterations

python
from langgraph.errors import GraphRecursionError

max_iterations = 3
recursion_limit = 2 * max_iterations + 1  # formula: 2n + 1

try:
    response = agent.invoke(
        {"messages": [...]},
        {"recursion_limit": recursion_limit}
    )
except GraphRecursionError:
    print("Agent stopped due to max iterations.")

Common Failure Modes in Multi-Agent Systems

Three Categories of Failure

Category | Common problems
Specification & system design | Unclear task definitions, muddled role specifications, lost conversation history, unclear termination conditions
Inter-agent coordination | Conversation resets, failure to ask for clarification, task derailment, ignoring other agents' input, reasoning-action mismatch
Task verification & termination | Premature termination, incomplete or incorrect verification

Mitigation Strategies

  1. Specify agent roles explicitly - clearly define each agent's responsibility boundaries
  2. Strengthen the orchestration strategy - use standard protocols such as MCP/A2A
  3. Set a sensible recursion_limit - to prevent infinite loops
  4. Add verification steps - check task quality at key points (see the sketch below)
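
A minimal sketch of such a verification step, assuming an llm chat model as defined earlier; the Verdict schema and the judging prompt are illustrative:

python
from typing import Literal
from typing_extensions import TypedDict
from langgraph.graph import END
from langgraph.types import Command

class Verdict(TypedDict):
    complete: bool  # True if the original request is fully satisfied

def validate_result(state) -> Command[Literal["supervisor", "__end__"]]:
    # Ask the model whether the latest results actually satisfy the request;
    # if not, hand control back to the supervisor for another round.
    verdict = llm.with_structured_output(Verdict).invoke(
        [{"role": "system", "content": "Judge whether the user's task is fully solved."}]
        + state["messages"]
    )
    return Command(goto=END if verdict["complete"] else "supervisor")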

A 7-Step Roadmap for Building Scalable Agents

Step | Description | Recommendation
1. Choose a model | Pick an LLM that is strong at reasoning | Start with Llama/Claude/Mistral, etc.
2. Design the reasoning flow | Define how the agent works through tasks | Use ReAct or Plan-then-Execute
3. Establish operating rules | Set the rules of interaction | Define response formats and when tools may be used
4. Add memory | Compensate for the LLM's lack of long-term memory | Use MemGPT/ZepAI
5. Integrate tools and APIs | Let the agent take real actions | Use MCP to standardize tool integration
6. Assign clear goals | Give concrete tasks | ✅ "Summarize user feedback" ❌ "Be helpful"
7. Scale out to a multi-agent team | Create specialized, collaborating agents | Clear division of labor; each agent stays in its own domain

Customizing Handoff Tools (Advanced)

Custom Handoff Tool Names and Prefixes

python
from langgraph_supervisor import create_handoff_tool

# Custom tool names
workflow = create_supervisor(
    [research_agent, math_agent],
    tools=[
        create_handoff_tool(agent_name="math_expert", name="assign_to_math"),
        create_handoff_tool(agent_name="research_expert", name="assign_to_research"),
    ],
    model=model,
)

# Custom tool-name prefix
workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    handoff_tool_prefix="delegate_to"  # tool names become delegate_to_<agent_name>
)

# Disable handoff messages (for a cleaner history)
workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    add_handoff_messages=False
)

LangGraph 1.x | Supervisor Multi-Agent Cheatsheet v1.2 - adds highlights from Vipra Singh's article
Released under the MIT License. Content copyright remains with the author.
基于 MIT 许可证发布。内容版权归作者所有。