Generate Text
Generates text completions using OpenAI's language models with support for vision, chat history, tool calling, and streaming responses.
Common Properties
- Name - The custom name of the node.
- Color - The custom color of the node.
- Delay Before (sec) - Waits in seconds before executing the node.
- Delay After (sec) - Waits in seconds after executing the node.
- Continue On Error - Automation will continue regardless of any error. Default: false.
info
If the Continue On Error property is true, no error is caught when the project is executed, even if a Catch node is used.
Inputs
- Connection Id - Connection identifier from Connect node. Optional if providing API Key directly.
- API Key - OpenAI API key credential. Optional if using Connection Id.
- System Prompt - Instructions that guide the AI's behavior and personality. Default: "You are a helpful assistant."
- User Prompt - The user's message or question to send to the AI model. Required.
- Chat History - Array of previous conversation messages for maintaining context across multiple interactions (see the example after this list).
- Tool Message - Tool response message to include in the conversation (for function calling workflows).
- Image Url - URL or local file path to a single image for vision capabilities.
- Images - Array of image URLs or local file paths for multi-image analysis.
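The Chat History and Tool Message inputs follow the OpenAI chat message format. A minimal sketch of what these inputs could contain; the variable names and the tool_call_id value are illustrative, not fixed by the node:

```typescript
// Illustrative chat history array in the OpenAI message format.
const history = [
  { role: "user", content: "What is the capital of France?" },
  { role: "assistant", content: "The capital of France is Paris." },
];

// A tool message (for function-calling workflows) references the call it answers.
// The id and payload below are made-up examples.
const toolMessage = {
  role: "tool",
  tool_call_id: "call_abc123",        // id returned by the model's tool call
  content: '{"temperature_c": 21}',   // your function's result, as a string
};
```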
Options
Model Selection
- Model - Language model to use:
  - GPT-5 - Latest and most capable model
  - GPT-5 Mini - Fast and cost-effective GPT-5 variant
  - GPT-5 Nano - Ultra-lightweight GPT-5 variant
  - GPT-5 Pro - Enhanced GPT-5 for complex tasks
  - GPT-5 Chat - Optimized for conversational tasks
  - o3 - Advanced reasoning model
  - o3 Mini - Compact reasoning model
  - o4 Mini - Next-gen compact reasoning model
  - GPT-4.1 - Latest GPT-4 generation
  - GPT-4.1 Mini - Efficient GPT-4.1 variant
  - GPT-4.1 Nano - Ultra-compact GPT-4.1
  - GPT-4o - Multimodal model with vision
  - GPT-4o Mini - Efficient multimodal model
  - GPT-4o Audio Preview - Model with audio capabilities
  - o1 - Reasoning model
  - o1 Mini - Compact reasoning model
  - o1 Pro - Professional reasoning model
  - GPT-3.5 Turbo - Fast and affordable
  - Custom Model - Specify a custom model name
- Custom Model - Custom model name when "Custom Model" is selected.
- Use Robomotion AI Credits - Use Robomotion credits instead of your own API key.
Generation Settings
- Number of Generations - Number of text responses to generate (1-4). Default: 1.
- Stream - Enable streaming response for real-time token generation. Default: false.
- JSON Mode - Force the model to output valid JSON. Default: false.
- Temperature - Controls randomness (0.0-2.0). Lower values are more focused and deterministic. Higher values are more creative.
- Top P - Alternative to temperature using nucleus sampling (0.0-1.0). Adjust either Temperature or Top P, not both.
- Max Tokens - Maximum number of tokens to generate in the response.
- Frequency Penalty - Penalizes tokens based on their frequency in the text so far (-2.0 to 2.0). Reduces repetition.
- Presence Penalty - Penalizes tokens based on whether they appear in the text so far (-2.0 to 2.0). Encourages topic diversity.
- Stop Sequences - Up to 4 sequences where the model will stop generating tokens. (A sketch of how these settings map onto the underlying API request follows this list.)
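These options correspond to standard OpenAI Chat Completions parameters. A hedged sketch of the request body a configured node roughly translates into; all values are examples, not defaults, and the node builds this payload for you:

```typescript
// Rough mapping of the node's Generation Settings to Chat Completions fields.
const requestBody = {
  model: "gpt-4o-mini",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Summarize this paragraph..." },
  ],
  n: 1,                  // Number of Generations
  stream: false,         // Stream
  // response_format: { type: "json_object" }, // JSON Mode (only when enabled)
  temperature: 0.7,      // Temperature
  // top_p: 0.9,         // Top P (use instead of temperature, not with it)
  max_tokens: 512,       // Max Tokens
  frequency_penalty: 0,  // Frequency Penalty
  presence_penalty: 0,   // Presence Penalty
  stop: ["\n\n"],        // Stop Sequences (up to 4)
};
```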
Reasoning (o-series models)
- Reasoning Effort - Computational effort for reasoning models (o1, o3, o4-mini):
  - Low - Faster, less thorough
  - Medium - Balanced (default)
  - High - Slower, more thorough
Vision Settings
- Image Detail - Image analysis detail level for vision models (see the sketch after this list):
  - Auto - Model decides based on image (default)
  - High - Detailed analysis, higher cost
  - Low - Basic analysis, lower cost
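For reference, image inputs correspond to image_url content parts in the OpenAI message format, where the detail field carries the Image Detail option. A sketch in which the URL and the base64 data are placeholders:

```typescript
// Illustrative user message combining text with two images.
// Local files are sent as base64 data URLs; remote URLs are passed through as-is.
const visionMessage = {
  role: "user",
  content: [
    { type: "text", text: "What objects are in these images?" },
    { type: "image_url", image_url: { url: "https://example.com/photo.jpg", detail: "low" } },
    { type: "image_url", image_url: { url: "data:image/png;base64,iVBOR...", detail: "auto" } },
  ],
};
```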
Tool Calling
- Tools Function - Function definition object with name, description, and parameters schema for tool calling (see the example after this list).
- Tool Choice - Control which tool the model calls (auto, none, or specific function).
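Tools Function expects an OpenAI-style function definition whose parameters are described with JSON Schema. A minimal sketch; the get_weather function and its fields are made up for illustration:

```typescript
// Example tool definition; "get_weather" and its parameters are hypothetical.
const tool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Get the current weather for a city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name, e.g. Paris" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] },
      },
      required: ["city"],
    },
  },
};

// Tool Choice can be "auto", "none", or force a specific function:
const toolChoice = { type: "function", function: { name: "get_weather" } };
```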
Advanced
- Seed - Random seed for reproducible outputs (Beta feature).
- User - Unique identifier for your end-user (for monitoring and abuse prevention).
- Timeout (seconds) - Request timeout in seconds. Default: 120.
- Include Raw Response - Include the full API response object in outputs. Default: false.
Outputs
- Text - Generated text response. Returns a string if Number of Generations is 1, or an array of strings if more than 1 (see the snippet after this list).
- Raw Response - Complete API response object (only set when "Include Raw Response" is enabled).
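Because Text can be either a string or an array, downstream logic that reads it should handle both shapes. A minimal sketch, assuming the output was mapped to a msg.text variable (that name is an assumption, not a fixed output):

```typescript
// Normalize the Text output to an array of strings before further processing.
const msg = { text: ["First draft...", "Second draft..."] as string | string[] };
const generations: string[] = Array.isArray(msg.text) ? msg.text : [msg.text];
generations.forEach((t, i) => console.log(`Generation ${i + 1}: ${t}`));
```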
How It Works
The Generate Text node sends your prompt to the selected OpenAI language model. When executed, it:
- Validates the connection or API key
- Builds the conversation with system prompt, user prompt, and optional chat history
- Processes any images, encoding local files to base64 or using URLs directly (see the sketch after these steps)
- Configures all generation parameters
- Sends the request to the selected model
- Returns the generated text response
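The image-processing step can be pictured as follows: a local file is read and embedded as a base64 data URL, while an http(s) URL is passed through unchanged. A rough sketch of that logic, not the node's actual implementation:

```typescript
import { readFileSync } from "node:fs";
import { extname } from "node:path";

// Rough sketch of the image-handling step; assumes a Node.js environment.
function toImageUrl(pathOrUrl: string): string {
  if (/^https?:\/\//i.test(pathOrUrl)) return pathOrUrl; // remote URL: use directly
  const mime: Record<string, string> = {
    ".jpg": "image/jpeg", ".jpeg": "image/jpeg", ".png": "image/png",
    ".webp": "image/webp", ".gif": "image/gif",
  };
  const type = mime[extname(pathOrUrl).toLowerCase()];
  if (!type) throw new Error("Unsupported image format");
  const base64 = readFileSync(pathOrUrl).toString("base64");
  return `data:${type};base64,${base64}`;
}
```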
Usage Examples
Example 1: Simple Text Generation
Input:
- User Prompt: "Write a professional email asking for a meeting next week"
- Model: gpt-4o-mini
Output:
- Text: "Subject: Meeting Request for Next Week..."
Example 2: Vision - Analyze an Image
Input:
- User Prompt: "What objects are in this image?"
- Image Url: "C:/screenshots/photo.jpg"
- Model: gpt-4o
Output:
- Text: "I can see a laptop, coffee mug, notebook, and pen on a wooden desk..."
Example 3: JSON Mode for Structured Output
Input:
- System Prompt: "Extract contact information as JSON"
- User Prompt: "John Doe, email: john@example.com, phone: 555-1234"
- JSON Mode: enabled
- Model: gpt-4o
Output:
- Text: {"name": "John Doe", "email": "john@example.com", "phone": "555-1234"}
Example 4: Multi-turn Conversation
Flow (see the sketch after these steps):
1. First call:
- User Prompt: "What is the capital of France?"
- Save response to msg.history
2. Second call:
- User Prompt: "What is its population?"
- Chat History: msg.history
- Model understands "its" refers to Paris
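A hedged sketch of how the history could be accumulated between the two calls, for example in a Function node placed between them. The msg.history and msg.text names are assumptions about how you mapped the node's inputs and outputs:

```typescript
// After the first Generate Text call: append the exchange to the running history.
type ChatMessage = { role: "user" | "assistant"; content: string };

const msg: { history?: ChatMessage[]; text: string } = {
  text: "The capital of France is Paris.", // first call's Text output (example value)
};

msg.history = [
  ...(msg.history ?? []),
  { role: "user", content: "What is the capital of France?" },
  { role: "assistant", content: msg.text },
];
// Pass msg.history into the Chat History input of the second call,
// with User Prompt set to "What is its population?".
```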
Example 5: Temperature Control
Creative writing (high temperature):
- Temperature: 1.5
- User Prompt: "Write a creative story opening"
- Result: More varied, creative responses
Factual extraction (low temperature):
- Temperature: 0.2
- User Prompt: "Extract the date from: Meeting on Jan 15"
- Result: More consistent, focused responses
Requirements
- Either a Connection Id from Connect node OR an API Key credential
- Non-empty User Prompt
- For vision: GPT-4o or GPT-4o-mini model
- For JSON mode: Compatible model (GPT-4o, GPT-4-turbo, etc.)
- For tool calling: Model with function calling support
- Local images must be in JPEG, PNG, WebP, or GIF format (max 20MB)
Error Handling
The node will return errors in these cases:
- ErrInvalidArg: The User Prompt is empty, the model selection is invalid, or the custom model name is not provided
- ErrAPICall: OpenAI API errors (rate limits, invalid requests, model not available)
- File errors: Image file not found or unsupported format
- Timeout errors: Request exceeded timeout duration
Tips for RPA Developers
- Model Selection: Use gpt-4o-mini for most tasks - it's fast and affordable. Use GPT-5 or o3 for complex reasoning.
- Temperature: Use 0-0.3 for factual tasks, 0.7-1.0 for creative tasks, 1.5+ for very creative outputs.
- Token Management: Set Max Tokens to prevent excessive costs. Monitor usage via the raw response.
- Vision: Use "low" detail for simple image analysis to save costs. Use "high" for detailed analysis.
- Chat History: Build conversations by passing previous responses as chat history. Format: [{role: "user", content: "..."}, {role: "assistant", content: "..."}]
- JSON Mode: Always instruct the model to output JSON in your prompt when using JSON mode.
- Streaming: Enable for real-time UIs. Disable for batch processing.
- Error Handling: Wrap in Try-Catch blocks and handle rate limit errors with retries (see the sketch after this list).
- Cost Optimization: Use lower-cost models for simple tasks, cache system prompts, and use max_tokens to limit costs.
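For the retry tip above, a simple retry-with-exponential-backoff wrapper is usually enough. A sketch under the assumption that you invoke the request from code; the isRateLimit check is a guess at the error text and should be matched to the errors you actually see:

```typescript
// Generic retry helper with exponential backoff for rate-limit style failures.
async function withRetries<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const isRateLimit = String(err).toLowerCase().includes("rate limit");
      if (!isRateLimit || attempt >= maxRetries) throw err;
      const delayMs = 1000 * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```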
Common Errors and Solutions
Error: "User Prompt cannot be empty"
- Solution: Ensure the User Prompt input has a value. It cannot be blank.
Error: "Custom Model cannot be empty"
- Solution: When "Custom Model" is selected, you must provide a model name in the Custom Model option.
Error: "Unsupported image format"
- Solution: Images must be JPEG, PNG, WebP, or GIF. Convert your image to a supported format.
Error: "OpenAI API error: rate limit exceeded"
- Solution: You've exceeded your API rate limit. Wait and retry, or upgrade your OpenAI plan.
Error: "Image file does not exist"
- Solution: Check that the image file path is correct and the file exists at that location.
Unexpected JSON output
- Solution: When using JSON Mode, make sure to explicitly ask for JSON in your prompt (e.g., "Return the data as JSON").
Model produces inconsistent results
- Solution: Lower the temperature (0-0.3) or set a seed value for more deterministic outputs.