All parameters inside the Agent class follow the same structure as the OpenAI Agents SDK Agent. However, there are a few advanced parameters that require more explanation.

Parallel Tool Calls

Controls whether tools execute in parallel or sequentially for a given agent.
from agency_swarm import Agent, ModelSettings

agent = Agent(
    name="MyAgent",
    instructions="...",
    model_settings=ModelSettings(parallel_tool_calls=False),
)
To force sequential execution for a specific tool regardless of that setting, set one_call_at_a_time = True in the tool’s ToolConfig. See Advanced Tool Configuration.

File Search

If your files_folder ends with _vs_<vector_store_id>, Agency Swarm automatically associates the folder’s files with that Vector Store and adds FileSearchTool to the agent. Whether raw search results are returned alongside the answer is toggled via the Agent’s include_search_results flag.

Conversation Starters Cache

Conversation starters are the suggested prompts you see in the chat UI. Caching makes the first reply instant by serving a saved response without calling the LLM.
from agency_swarm import Agent

agent = Agent(
    name="SupportAgent",
    instructions="You are helpful.",
    model="gpt-5-mini",
    conversation_starters=["Support: I need help with billing"],
    cache_conversation_starters=True,
)
In this example:
  • The UI shows the starter prompt “Support: I need help with billing”.
  • The FastAPI /get_metadata response exposes it as conversationStarters for UI rendering.
  • With cache_conversation_starters=True, picking that prompt can return a saved reply without calling the LLM.
Streaming the cached reply includes events for text, tool calls, reasoning, and handoffs.
If you change key agent settings (like instructions, tools, or model), cached starters are rebuilt.
Cache files live under AGENCY_SWARM_CHATS_DIR (defaults to .agency_swarm) in starter_cache/. In production, point AGENCY_SWARM_CHATS_DIR at persistent storage to keep instant replies across restarts.
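For example, before constructing your agency (the path below is illustrative):

```python
import os

# Put chats and the starter cache on a persistent volume so cached
# replies survive restarts. Cache files then live under
# /var/lib/agency_swarm/starter_cache/.
os.environ["AGENCY_SWARM_CHATS_DIR"] = "/var/lib/agency_swarm"
```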
See also: Agent Overview, FastAPI Integration.

Output Validation

Use output_guardrails on the Agent to validate outputs. See the detailed guide: Guardrails.

Few‑Shot Examples

You can include few‑shot examples in instructions as plain text or pass message history to get_response / get_response_stream.
import asyncio

from agency_swarm import Agent

agent = Agent(name="MyAgent", instructions="You are a helpful assistant.")

examples = [
    {"role": "user", "content": "Hi!"},
    {"role": "assistant", "content": "Hello!"},
]
new_message = {"role": "user", "content": "Can you help me write a short summary?"}

async def main():
    response = await agent.get_response(examples + [new_message])

asyncio.run(main())
See also: Few‑Shot Examples.