Generate Completion
Generates text completions using a local Ollama AI model. This node is ideal for single-prompt text generation tasks such as content creation, summarization, translation, and general text processing.
info
For conversational AI with context and message history, use Generate Chat Completion instead.
Common Properties
- Name - The custom name of the node.
- Color - The custom color of the node.
- Delay Before (sec) - Waits the specified number of seconds before executing the node.
- Delay After (sec) - Waits the specified number of seconds after executing the node.
- Continue On Error - Automation will continue regardless of any error. The default value is false.
info
If the Continue On Error property is true, no error is caught when the project is executed, even if a Catch node is used.
Inputs
- Client ID - The Client ID from the Connect node. Optional if Host URL is provided.
- Model - The name of the Ollama model to use (e.g., llama3, mistral, codellama, gemma).
- Prompt - The text prompt to send to the model. This is the instruction or question you want the AI to respond to.
Output
- Response - The generated text response from the AI model.
Options
- Options - A JSON object containing model parameters to control generation behavior:
- temperature (0.0-2.0) - Controls randomness. Lower values (0.1-0.5) make output more focused and deterministic; higher values (0.8-2.0) make output more creative and varied. Default: 0.8
- top_p (0.0-1.0) - Nucleus sampling; controls diversity. Lower values make output more focused. Default: 0.9
- top_k (integer) - Limits sampling to the top K tokens. Lower values make output more focused.
- num_predict (integer) - Maximum number of tokens to generate. Default: 128; -1 for unlimited.
- repeat_penalty (0.0-2.0) - Penalty for repeating tokens. Higher values reduce repetition. Default: 1.1
- seed (integer) - Random seed for reproducible outputs.
- num_ctx (integer) - Context window size in tokens. Default: 2048
- Host URL - Ollama server URL (optional). Use this instead of Client ID for a direct connection. Example: http://localhost:11434
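Under the hood, the Model, Prompt, and Options inputs correspond to fields in the request body that Ollama's /api/generate endpoint accepts. The following Python sketch shows how such a payload could be assembled; the field names follow Ollama's API, but the helper function itself is illustrative and not part of the node:

```python
import json

def build_generate_payload(model, prompt, options=None):
    """Assemble a request body for Ollama's /api/generate endpoint.

    `options` is the same JSON object described above (temperature,
    top_p, num_predict, ...). Illustrative helper, not the node's code.
    """
    payload = {"model": model, "prompt": prompt, "stream": True}
    if options:
        payload["options"] = options
    return payload

payload = build_generate_payload(
    "llama3",
    "Generate a product description for a wireless mouse",
    {"temperature": 0.3, "num_predict": 150},
)
print(json.dumps(payload, indent=2))
```

Any option you omit falls back to the model's default, so the Options object only needs the parameters you actually want to override.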
How It Works
The Generate Completion node:
- Connects to the Ollama server (via Client ID or Host URL)
- Sends your prompt to the specified model
- Receives the generated text in a streaming fashion
- Concatenates all response chunks
- Returns the complete generated text
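Ollama streams its reply as newline-delimited JSON objects, each carrying a `response` text chunk and a `done` flag. The concatenation step above can be sketched as follows, using a mocked stream in place of a live server (the function name is illustrative):

```python
import json

def concatenate_stream(lines):
    """Join the `response` fields of a streamed /api/generate reply.

    `lines` stands in for the newline-delimited JSON chunks an Ollama
    server sends back; a real call would read them from the HTTP response.
    """
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):  # final chunk signals end of generation
            break
    return "".join(parts)

# Mocked stream in the shape a live server would emit.
mock_stream = [
    '{"response": "Project Update: ", "done": false}',
    '{"response": "Q4 Milestones Achieved", "done": true}',
]
print(concatenate_stream(mock_stream))
```

The node performs this buffering for you, so the Response output always contains the complete text rather than individual chunks.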
Usage Examples
Example 1: Simple Text Generation
Inputs:
- Model: "llama3"
- Prompt: "Write a professional email subject line for a project update"
Output:
- Response: "Project Update: Q4 Milestones Achieved and Next Steps"
Example 2: Content Summarization
Inputs:
- Model: "mistral"
- Prompt: "Summarize this text in 3 bullet points: [long article text here]"
Output:
- Response: "
• Main point about topic A
• Key insight about topic B
• Conclusion about topic C
"
Example 3: Data Extraction
Inputs:
- Model: "llama3"
- Prompt: "Extract the email address from this text: 'Contact John at john.doe@example.com for more info'"
Output:
- Response: "john.doe@example.com"
Example 4: Code Generation
Inputs:
- Model: "codellama"
- Prompt: "Write a Python function to calculate factorial"
Output:
- Response: "
def factorial(n):
    if n == 0 or n == 1:
        return 1
    return n * factorial(n - 1)
"
Example 5: Using Options for Controlled Output
Inputs:
- Model: "llama3"
- Prompt: "Generate a product description for a wireless mouse"
- Options: {
"temperature": 0.3,
"num_predict": 150,
"repeat_penalty": 1.2
}
Output:
- Response: "This ergonomic wireless mouse features precise optical tracking..."