Ask AI Node
The LLM node lets you ask AI to help with text. It’s like having a smart assistant that can summarize long emails, categorize support tickets, write responses, extract information from documents, and much more. LLM stands for “Large Language Model” - the same AI technology behind ChatGPT and Claude.
When to Use
- Summarizing - Turn a long document into bullet points
- Categorizing - Sort emails into “Sales”, “Support”, “Spam”, etc.
- Extracting - Pull out names, dates, and numbers from messy text
- Writing - Generate email replies, reports, or social posts
- Translating - Convert text to another language
- Analyzing - Detect if a message is positive or negative, urgent or not
- Reformatting - Turn a paragraph into a list, or vice versa
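Most of these tasks follow the same pattern: a system prompt that fixes the AI’s role and output format, plus a prompt that carries the data. A sketch for the sentiment-analysis case (the node name fetch_email is made up for illustration; the {{node_name.value}} syntax is described under Settings below):

```text
System prompt: You are a sentiment classifier. Reply with exactly one
word: positive, negative, or neutral.

Prompt: Classify this message: {{fetch_email.value}}
```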
Example: Email Categorizer
Automatically categorize incoming support emails. An illustrative system prompt: “You are a support triage assistant. Classify each email as Sales, Support, or Spam, and reply with only the category name.”
Example: Meeting Summary Generator
Create structured meeting notes from transcripts. An illustrative system prompt: “Summarize the transcript as three lists: decisions made, action items with owners, and open questions.”
Example: Data Extraction
Extract structured data from unstructured text. An illustrative system prompt: “From the text below, extract every person’s name, every date, and every dollar amount, one item per line.”
Tips for Getting Good Results
Be Specific About What You Want
Give Background Information
Tell It Exactly How to Respond
Show Examples
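Applied together, the four tips above turn a vague request into a reliable one. Both prompts below are invented for illustration (the node name fetch_email is hypothetical):

```text
Vague: Summarize this email.

Specific: You are an assistant for a plumbing company (background).
Summarize the email below in at most three bullet points, each under
15 words (exact response format). Example output:
- Customer reports leaking kitchen faucet
- Requests visit before Friday
- Asks for cost estimate

Email: {{fetch_email.value}}
```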
Structured Output
For reliable parsing, instruct the model to respond only in JSON and spell out the exact shape you expect. An illustrative system prompt: “Respond with only a JSON object containing the keys ‘category’ (string) and ‘confidence’ (number from 0 to 1). Do not include any other text.”
Which Model Should I Choose?
| What You’re Doing | Best Choice |
|---|---|
| Complex analysis, hard questions | Largest available model (most capable, higher cost) |
| Most everyday tasks | Mid-tier model (good balance of speed and capability) |
| Simple tasks, lots of them | Smallest/fastest model (lowest cost) |
| Very long documents | Models with larger context windows |
Model names and capabilities change frequently. The model selector in the UI shows current options with their capabilities.
Remembering Previous Conversations
By default, each LLM node starts fresh with no memory of previous AI calls, but you can enable memory in the Conversation Memory settings below.
Saving Money on AI Costs
- Use simpler models for simple tasks - Don’t pay for the largest model when a smaller one would work fine
- Keep responses short when possible - If you only need a yes/no answer, don’t allow long responses
- Save results to reuse later - Use Set Variable so you don’t have to ask the same question twice
- Skip AI when you don’t need it - Use Condition nodes to avoid unnecessary AI calls
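Back-of-the-envelope math for the first two tips. The per-token prices below are made-up placeholders, not real rates; the roughly-4-characters-per-token heuristic comes from the Settings section:

```python
# Illustrative cost sketch - prices are hypothetical placeholders, not real rates.
PRICE_PER_1K_INPUT = 0.01   # hypothetical $ per 1K prompt tokens
PRICE_PER_1K_OUTPUT = 0.03  # hypothetical $ per 1K completion tokens

def estimate_cost(prompt_chars: int, max_output_tokens: int) -> float:
    prompt_tokens = prompt_chars / 4  # rough ~4 chars/token heuristic
    return (prompt_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (max_output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# Capping a yes/no answer at 5 tokens vs. allowing a 1,000-token reply:
short_answer = estimate_cost(2000, 5)      # ~$0.005
long_answer = estimate_cost(2000, 1000)    # ~$0.035
```

The output cap dominates here: trimming the allowed response length saves far more than trimming the prompt.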
Settings
What to call this node (shown on the canvas).
A short code to reference this node’s response.
Which AI company to use:
- OpenAI - Makes GPT-4 and ChatGPT
- Anthropic - Makes Claude
- Google - Makes Gemini
Which specific AI model to use. Newer/larger models are smarter but cost more.
Background instructions for the AI. This sets the context and tells the AI how to behave.
What you want the AI to do. This is your main question or request. You can include data from previous nodes using
{{node_name.value}}.
How creative vs. consistent the AI should be. 0 = same answer every time, 1 = more varied and creative.
How long the response can be. A token is roughly 3-4 characters for English text (varies by language and content).
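A quick way to sanity-check a limit, using the rough heuristic above (about 4 characters per token; real tokenizers vary by language and content):

```python
def rough_token_estimate(text: str) -> int:
    # ~4 characters per token for English text (rough heuristic, not exact)
    return max(1, len(text) // 4)

rough_token_estimate("The quick brown fox jumps over the lazy dog.")  # ~11
```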
How many times to try again if something goes wrong.
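What retries typically look like under the hood - a sketch assuming exponential backoff (the node’s actual retry strategy isn’t documented here):

```python
import time

def call_with_retries(fn, retries: int):
    # Try fn() up to `retries` additional times before giving up.
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                raise  # out of attempts: surface the last error
            time.sleep(0.1 * 2 ** attempt)  # back off between attempts
```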
Conversation Memory
Enable to maintain conversation context across multiple LLM calls in the same workflow execution.
When memory is enabled, how many previous messages to include as context.
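A sketch of what the message count means, assuming a simple sliding window over the conversation history (the actual implementation may differ):

```python
def build_context(history: list, window: int) -> list:
    # Keep only the last `window` messages as conversation context.
    return history[-window:] if window > 0 else []

history = ["user: hi", "assistant: hello", "user: summarize the email"]
build_context(history, 2)  # keeps the last two messages
```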
Outputs
The model’s text response.
Token usage statistics:
- promptTokens - Tokens in your input
- completionTokens - Tokens in the response
- totalTokens - Combined total
Why the model stopped generating:
- stop - Natural completion
- length - Hit max tokens limit
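Put together, a downstream node might see something like the following. The value field matches the {{node_name.value}} syntax above; the wrapper field names tokens and finishReason are assumptions for illustration:

```json
{
  "value": "Category: Support",
  "tokens": {
    "promptTokens": 412,
    "completionTokens": 6,
    "totalTokens": 418
  },
  "finishReason": "stop"
}
```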
