Ask AI Node

The LLM node lets you ask AI to help with text. It’s like having a smart assistant that can summarize long emails, categorize support tickets, write responses, extract information from documents, and much more. LLM stands for “Large Language Model” - it’s the same AI technology behind ChatGPT and Claude.

When to Use

  • Summarizing - Turn a long document into bullet points
  • Categorizing - Sort emails into “Sales”, “Support”, “Spam”, etc.
  • Extracting - Pull out names, dates, and numbers from messy text
  • Writing - Generate email replies, reports, or social posts
  • Translating - Convert text to another language
  • Analyzing - Detect if a message is positive or negative, urgent or not
  • Reformatting - Turn a paragraph into a list, or vice versa

Use AI when you need to understand meaning. For simple tasks like removing spaces or splitting text, use Execute Code instead - it’s faster and costs less.
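
For instance, a whitespace cleanup that might tempt you to reach for AI is a one-liner in an Execute Code node. A minimal sketch - the input reference below is illustrative:

// Execute Code node: deterministic text cleanup needs no AI
const raw = input.event_from_app_1.email.subject;
return {
  cleaned: raw.trim().replace(/\s+/g, ' '),  // collapse repeated whitespace
  words: raw.trim().split(/\s+/)             // split into individual words
};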

Example: Email Categorizer

Automatically categorize incoming support emails:
1. Set up the event trigger

Add an Event from App node for Gmail to receive incoming emails.
2. Configure the LLM

Add an LLM node with:

System prompt:
You are an email classifier for a software company's support team.
Categorize emails into exactly one of these categories:
- BUG_REPORT
- FEATURE_REQUEST
- BILLING
- ACCOUNT_ACCESS
- GENERAL_INQUIRY

Respond with only the category name, nothing else.
User message:
Categorize this email:

From: {{event_from_app_1.email.from}}
Subject: {{event_from_app_1.email.subject}}
Body: {{event_from_app_1.email.body}}
3. Route based on category

Add a Switch node using {{llm_1.response}} to route to different handling workflows.
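
The model should return exactly one category name, but it can occasionally add whitespace or vary the casing. A small Execute Code node between the LLM and the Switch can normalize the value - a minimal sketch, assuming the node keys used above:

// Execute Code node: normalize the classifier output before routing
const categories = ['BUG_REPORT', 'FEATURE_REQUEST', 'BILLING', 'ACCOUNT_ACCESS', 'GENERAL_INQUIRY'];
const raw = input.llm_1.response.trim().toUpperCase();
// Fall back to GENERAL_INQUIRY if the model returned anything unexpected
return { category: categories.includes(raw) ? raw : 'GENERAL_INQUIRY' };

The Switch node can then route on the normalized category instead of the raw response.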

Example: Meeting Summary Generator

Create structured meeting notes from transcripts:

System prompt:
You are an expert at summarizing business meetings. Create clear,
actionable meeting summaries.
User message:
Summarize this meeting transcript into:
1. Key decisions made
2. Action items with owners
3. Open questions
4. Next steps

Transcript:
{{parse_file_1.content}}
Output usage:
# Send to Slack
{{llm_1.response}}

# Store for records
{{set_variable_1.value}}

Example: Data Extraction

Extract structured data from unstructured text:

System prompt:
You are a data extraction assistant. Extract information precisely
as requested and return it in valid JSON format.
User message:
Extract the following from this invoice:
- Vendor name
- Invoice number
- Date
- Line items (description, quantity, unit price)
- Total amount

Return as JSON.

Invoice text:
{{parse_file_1.content}}
Using the extracted data:
// In an Execute Code node, parse the JSON returned by the LLM
const invoice = JSON.parse(input.llm_1.response);

// Pass only the fields that downstream nodes need
return {
  vendorName: invoice.vendor_name,
  total: invoice.total_amount
};
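
Models sometimes wrap JSON in markdown code fences even when told not to. A more defensive version of the parse step might look like this - a sketch, with illustrative fallback fields:

// Execute Code node: tolerate markdown fences and invalid JSON
const raw = input.llm_1.response.trim();
// Strip a ```json ... ``` wrapper if the model added one
const text = raw.replace(/^```(?:json)?\s*/i, '').replace(/\s*```$/, '');
try {
  return { ok: true, invoice: JSON.parse(text) };
} catch (err) {
  // Route failures downstream, e.g. to a retry or human-review branch
  return { ok: false, error: err.message, raw };
}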

Tips for Getting Good Results

Be Specific About What You Want

Good: "Summarize in 3 bullet points, max 15 words each"
Bad: "Summarize this"
The more specific you are, the better the result.

Give Background Information

Good: "You're reading customer feedback for our project management software"
Bad: "Analyze this feedback"
Context helps the AI understand what it’s working with.

Tell It Exactly How to Respond

Good: "Respond with only 'APPROVE' or 'REJECT', nothing else"
Bad: "Tell me if this should be approved"
If you need a specific format, say so clearly.

Show Examples

Good:
"Categorize as positive, negative, or neutral.

Examples:
- 'Love this product!' → positive
- 'Terrible experience' → negative
- 'It works fine' → neutral

Now categorize: {{input}}"
Examples help the AI understand exactly what you’re looking for.

Structured Output

For reliable parsing, instruct the model to respond in JSON:

System prompt:
Always respond in valid JSON format. Do not include any text
outside the JSON object.
User message:
Analyze this customer review and respond with:
{
  "sentiment": "positive" | "negative" | "neutral",
  "topics": ["array", "of", "topics"],
  "urgency": 1-5,
  "summary": "one sentence summary"
}

Review: {{input}}
Use temperature 0 when you need consistent, structured output. Higher temperatures can cause format variations.
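
Even with temperature 0, it’s worth validating the parsed object before later nodes rely on it. A minimal sketch, assuming the schema from the prompt above:

// Execute Code node: validate the structured review analysis
const result = JSON.parse(input.llm_1.response);
const sentiments = ['positive', 'negative', 'neutral'];
if (!sentiments.includes(result.sentiment)) {
  throw new Error(`Unexpected sentiment: ${result.sentiment}`);
}
if (typeof result.urgency !== 'number' || result.urgency < 1 || result.urgency > 5) {
  throw new Error(`Urgency out of range: ${result.urgency}`);
}
return result; // safe for downstream nodes to consume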

Which Model Should I Choose?

What You’re Doing                   Best Choice
Complex analysis, hard questions    Largest available model (most capable, higher cost)
Most everyday tasks                 Mid-tier model (good balance of speed and capability)
Simple tasks, lots of them          Smallest/fastest model (lowest cost)
Very long documents                 Models with larger context windows
Model names and capabilities change frequently. The model selector in the UI shows current options with their capabilities.

Remembering Previous Conversations

By default, each LLM node starts fresh with no memory of previous AI calls. But you can enable memory:
LLM 1: "What's the capital of France?"
AI: "Paris"

LLM 2: "What's its population?" (with memory enabled)
AI: "Paris has about 2.1 million people..."
Without memory, the second question would fail because the AI wouldn’t know what “its” refers to.
Memory makes each call use more resources. If your workflow has many AI steps, set a limit on how much it remembers.

Saving Money on AI Costs

  • Use simpler models for simple tasks - Don’t pay for the largest model when a smaller one would work fine
  • Keep responses short when possible - If you only need a yes/no answer, lower maxTokens so long responses aren’t allowed
  • Save results to reuse later - Use Set Variable so you don’t have to ask the same question twice
  • Skip AI when you don’t need it - Use Condition nodes to avoid unnecessary AI calls, as in the sketch below
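
For the last point, a cheap deterministic pre-check can decide whether the LLM call is needed at all - a sketch, with an illustrative keyword list:

// Execute Code node: flag emails that don't need AI classification
const body = input.event_from_app_1.email.body.toLowerCase();
const skipPatterns = ['unsubscribe', 'out of office', 'delivery status notification'];
return { needsAi: !skipPatterns.some((p) => body.includes(p)) };

A Condition node on needsAi then routes only genuine emails to the LLM.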

Tips

Use the “Fill by AI” feature when configuring prompts. It can help you write better prompts based on your workflow context.
Chain multiple LLM nodes for complex tasks: one to analyze, one to decide, one to generate. This improves reliability and makes debugging easier.
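
A sketch of such a chain, using the template references from the examples above (node keys illustrative):

LLM 1 (analyze) - "List the specific complaints in this review: {{input}}"
LLM 2 (decide) - "Based on these complaints, answer only ESCALATE or STANDARD: {{llm_1.response}}"
LLM 3 (generate) - "Write a polite reply addressing these complaints: {{llm_1.response}}"

If the decision step misbehaves, you can inspect its input and output in isolation instead of debugging one giant prompt.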

Settings

name
string
default:"LLM"
What to call this node (shown on the canvas).
key
string
default:"llm_1"
A short code to reference this node’s response.
provider
string
required
Which AI company to use:
  • OpenAI - Makes GPT-4 and ChatGPT
  • Anthropic - Makes Claude
  • Google - Makes Gemini
model
string
required
Which specific AI model to use. Newer/larger models are smarter but cost more.
systemPrompt
string
Background instructions for the AI. This sets the context and tells the AI how to behave.
userMessage
string
required
What you want the AI to do. This is your main question or request. You can include data from previous nodes using {{node_name.value}}.
temperature
number
default:"0.7"
How creative vs. consistent the AI should be. 0 = same answer every time. 1 = more varied and creative.
maxTokens
number
default:"1024"
How long the response can be. A token is roughly 3-4 characters for English text (varies by language and content).
maxRetries
number
default:"3"
How many times to try again if something goes wrong.

Conversation Memory

chatMemoryEnabled
boolean
default:"false"
Enable to maintain conversation context across multiple LLM calls in the same workflow execution.
maxMessagesToKeep
number
default:"10"
When memory is enabled, how many previous messages to include as context.

Outputs

response
string
The model’s text response.
usage
object
Token usage statistics:
  • promptTokens - Tokens in your input
  • completionTokens - Tokens in the response
  • totalTokens - Combined total
finishReason
string
Why the model stopped generating:
  • stop - Natural completion
  • length - Hit max tokens limit
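
These outputs are useful for error handling. For example, a truncated response can be detected and flagged - a minimal sketch, assuming the default llm_1 key:

// Execute Code node: detect responses cut off by the maxTokens limit
if (input.llm_1.finishReason === 'length') {
  // Flag for a retry with a higher maxTokens setting
  return { truncated: true, tokensUsed: input.llm_1.usage.totalTokens };
}
return { truncated: false, text: input.llm_1.response };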