The LLM Actor handles conversations using a large language model (LLM) with tool calling. Unlike the Communication Actor, which uses the Skillset Engine, the LLM Actor gives you direct control over the model, system prompt, and available tools. It connects downstream of a Channel Dispatcher and takes over the conversation — processing each user message through the LLM, calling tools when needed, and sending the response back through the channel.

When to use

  • Custom AI agents — build agents with specific system prompts and behaviors
  • Tool-calling conversations — give the AI access to web search, knowledge lookup, or custom workflow tools
  • Open-ended conversations — support chats, product Q&A, or general-purpose assistants
  • Quick setup — configure a conversational agent with just a model and a prompt (no skillset needed)
If you need structured data collection with field validation and form-like behavior, use Communication Actor with the Skillset Engine instead.

How it works

When triggered by a conversation_updated event from the Channel Dispatcher, the LLM Actor:
  1. Finds its upstream Channel Dispatcher by traversing the workflow graph
  2. Gets the conversation with full message history
  3. Calls the LLM with the system prompt, conversation history, and available tools
  4. Executes any tool calls the LLM makes (web search, custom tools, etc.)
  5. Sends the response back through the channel
  6. Ends the conversation if the LLM calls the end_conversation or transfer_to_human tool
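The steps above can be sketched as a small Python loop. Every name here (`handle_conversation_updated`, `call_llm`, the message shapes) is an illustrative assumption, not the product's actual API:

```python
# Illustrative sketch of the LLM Actor loop. All names and message shapes
# are hypothetical stand-ins, not the product's real implementation.

BUILTIN_TERMINAL_TOOLS = {"end_conversation", "transfer_to_human"}

def handle_conversation_updated(conversation, call_llm, run_tool, send_via_channel):
    """Process one conversation_updated event from the Channel Dispatcher."""
    messages = conversation["messages"]          # full history from the dispatcher
    while True:
        reply = call_llm(messages)               # system prompt + history + tools
        for call in reply.get("tool_calls", []):
            if call["name"] in BUILTIN_TERMINAL_TOOLS:
                send_via_channel(reply.get("content", ""))
                return {"ended": True, "via": call["name"],
                        "reason": call["args"]["reason"]}
            # Execute the tool and feed the result back to the LLM.
            messages.append({"role": "tool", "name": call["name"],
                             "content": run_tool(call["name"], call["args"])})
        if not reply.get("tool_calls"):
            send_via_channel(reply["content"])   # plain assistant response
            return {"ended": False}
```

The loop keeps calling the LLM until it either produces a plain response or invokes one of the two terminal built-in tools.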

Built-in tools

Every LLM Actor automatically has two built-in tools:
Tool                Purpose
end_conversation    Called by the LLM when the conversation should end. Takes a reason parameter.
transfer_to_human   Called when the user requests a human or the LLM can’t help further. Takes a reason parameter.
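Assuming an OpenAI-style function-calling schema (the actual wire format is not documented here), the two built-in tools might be declared like this:

```python
# Hypothetical OpenAI-style function schemas for the two built-in tools.
# The exact format and descriptions the LLM Actor uses are assumptions.
BUILTIN_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "end_conversation",
            "description": "End the conversation when its goal is met.",
            "parameters": {
                "type": "object",
                "properties": {"reason": {"type": "string"}},
                "required": ["reason"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "transfer_to_human",
            "description": "Hand off to a human agent when the user asks for "
                           "one or the model cannot help further.",
            "parameters": {
                "type": "object",
                "properties": {"reason": {"type": "string"}},
                "required": ["reason"],
            },
        },
    },
]
```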

Exit conditions

Exit conditions tell the LLM when to end the conversation. They become part of the end_conversation tool description. For example:
  • “The user found the product they want”
  • “The user’s question has been fully answered”
  • “The user explicitly says goodbye”
Without exit conditions, the LLM uses its own judgment about when to call end_conversation.
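One way exit conditions could be folded into the end_conversation tool description (the exact wording the product uses is an assumption) is:

```python
def build_end_tool_description(exit_conditions):
    """Fold plain-text exit conditions into the end_conversation description.

    Hypothetical sketch: the real LLM Actor's wording may differ.
    """
    base = "End the conversation."
    if not exit_conditions:
        # No conditions configured: leave the decision to the model.
        return base + " Use your own judgment about when the conversation is over."
    bullets = "\n".join(f"- {c}" for c in exit_conditions)
    return f"{base} Call this tool when any of the following is true:\n{bullets}"
```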

Built-in skills

You can enable optional built-in capabilities:
Skill         What it does
Web Search    Search the web for current information using Tavily
Web Extract   Extract content from specific web pages using Tavily
These are toggled on/off in the LLM Actor settings.

Custom tools

Connect Custom Tool nodes to the LLM Actor’s tools handle to give the AI access to workflow-backed actions. When the LLM calls a custom tool, the workflow executes the connected node and returns the result to the LLM, which can then use it in its response.
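Routing a tool call to its connected node can be sketched as follows; `dispatch_tool_call` and the `nodes` mapping are hypothetical stand-ins for the workflow engine:

```python
# Sketch of routing a custom tool call to its connected workflow node.
# `nodes` maps tool names to callables standing in for Custom Tool nodes;
# all names here are illustrative, not the product's API.
def dispatch_tool_call(nodes, call):
    node = nodes.get(call["name"])
    if node is None:
        return {"error": f"unknown tool: {call['name']}"}
    result = node(**call["args"])        # execute the connected node
    return {"tool": call["name"], "result": result}
```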

Example: support agent with tools

Build a support agent that can look up orders and process refunds:
1. Set up the channel

Add a Channel Dispatcher with your preferred channel (e.g., Web Widget in responder mode).
2. Add an LLM Actor

Connect an LLM Actor to the conversation_updated handle. Configure:
  • Model: Choose your preferred LLM
  • System prompt: “You are a helpful customer support agent for Acme Corp. Help customers with order inquiries, returns, and product questions. Be friendly and concise.”
  • Exit conditions: “The customer’s issue is resolved”, “The customer has no more questions”
3. Connect tools

Add Custom Tool nodes connected to the tools handle:
  • “Look up order” — takes an order ID, returns order details
  • “Process refund” — takes an order ID and reason, initiates a refund
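Assuming the same function-schema style as the built-in tools, the two Custom Tool nodes in this example could expose parameter schemas like these (names and fields are illustrative):

```python
# Hypothetical parameter schemas for the two Custom Tool nodes above.
LOOK_UP_ORDER = {
    "name": "look_up_order",
    "description": "Look up an order and return its details.",
    "parameters": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}

PROCESS_REFUND = {
    "name": "process_refund",
    "description": "Initiate a refund for an order.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string"},
            "reason": {"type": "string"},
        },
        "required": ["order_id", "reason"],
    },
}
```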
4. Handle escalation

Connect the transfer_to_human handle to a notification node (e.g., Slack message to the support team).
Example: product assistant with web search

Build a product assistant that can search the web for current information. Configure the LLM Actor with:
  • System prompt: “You are a product assistant for Acme Corp. Answer questions about our products, pricing, and availability. Use web search to find current information when needed.”
  • Built-in skills: Enable Web Search
  • Exit conditions: “The user has found what they need”, “The user says goodbye”

Example: multi-step conversation with actor handoff

Use one LLM Actor for initial triage, then hand off to a specialized actor.

Settings

modelName
string
required
The LLM model to use (e.g., grok-4.1-fast, gpt-4o, claude-sonnet-4-20250514).
litellmProvider
string
required
The model provider route (e.g., openrouter/x-ai, openai, anthropic).
systemPrompt
string
required
Instructions that define the AI’s behavior, personality, and goals. Supports variable interpolation with {{variable_name}}.
exitConditions
array
List of conditions that tell the LLM when to end the conversation. Each condition is a plain-text description (e.g., “The user’s question has been fully answered”). These become part of the end_conversation tool description.
builtinSkills
object
Toggle built-in capabilities:
  • web_search — search the web for information
  • web_extract — extract content from web pages
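A complete settings object, together with a minimal sketch of {{variable_name}} interpolation, might look like this. The field values are illustrative, and the interpolation helper is an assumption, not the product's implementation:

```python
import re

# Hypothetical LLM Actor settings combining the fields documented above.
SETTINGS = {
    "modelName": "gpt-4o",
    "litellmProvider": "openai",
    "systemPrompt": "You are a support agent for {{company_name}}. Be concise.",
    "exitConditions": ["The user's question has been fully answered"],
    "builtinSkills": {"web_search": True, "web_extract": False},
}

def interpolate(prompt, variables):
    """Replace {{variable_name}} placeholders with workflow variables.

    Sketch only: unknown variables are left as-is here, which may or may
    not match the product's actual behavior.
    """
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(variables.get(m.group(1), m.group(0))),
                  prompt)
```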

Outputs

conversationId
string
The ID of the conversation this actor processed.
result
object
Result data from the actor execution.
success
boolean
Whether the actor completed successfully.

Events emitted

Event               When it fires
conversationEnded   The LLM called the end_conversation tool
transfer_to_human   The LLM called the transfer_to_human tool

Related

Channel Dispatcher

The channel node that feeds conversations to this actor.

Communication Actor

Alternative actor using the Skillset Engine for structured data collection.

Send Message

Send a follow-up message after the actor finishes.

Clear Conversation

Reset conversation history between actor handoffs.