Generate Text
Generates text using Google Vertex AI's text generation models.
Common Properties
- Name - The custom name of the node.
- Color - The custom color of the node.
- Delay Before (sec) - Waits in seconds before executing the node.
- Delay After (sec) - Waits in seconds after executing the node.
- Continue On Error - Automation will continue regardless of any error. The default value is false.
info
If the ContinueOnError property is true, no error is caught when the project is executed, even if a Catch node is used.
Inputs
- Connection Id - The unique identifier of the connection to Vertex AI, typically obtained from the Connect node.
- Prompt - The prompt describing the text to be generated.
Options
- Temperature - Controls randomness in the response (0.0-1.0). Lower values make responses more deterministic.
- Max Output Tokens - Maximum number of tokens in the response. Default is 256. Range is 1-2048 for text-bison (latest) or 1-1024 for text-bison@001.
- TopK - Limits the number of highest probability vocabulary tokens considered at each step.
- TopP - Nucleus sampling probability threshold (0.0-1.0).
- Stop Sequence - Sequences at which the model stops generating further tokens.
- Candidate Count - Number of response candidates to generate. Default is 1.
- Model - The text model to use. The default is Text Bison@001. Options include:
  - Custom Model
  - Text Bison@001
  - Text Bison
  - Text Bison 32k
- Custom Model - The custom model name to use when "Custom Model" is selected.
- Locations - Google Cloud region for the service. Default is "us-central1".
- Publishers - Publisher of the model. Default is "google".
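Taken together, these options correspond to the `parameters` object of a Vertex AI `predict` request. The sketch below illustrates that mapping; the JSON keys follow the public Vertex AI text model API, while the exact serialization performed by the node is an assumption (see also the fuller request sketch under How It Works).

```python
# Illustrative mapping of the node's options to Vertex AI request parameters.
# JSON keys follow the public Vertex AI text model (text-bison) predict API;
# the values shown are example settings, not defaults enforced by the node.
parameters = {
    "temperature": 0.2,         # Temperature (0.0-1.0)
    "maxOutputTokens": 256,     # Max Output Tokens
    "topK": 40,                 # TopK
    "topP": 0.95,               # TopP
    "stopSequences": ["\n\n"],  # Stop Sequence
    "candidateCount": 1,        # Candidate Count
}
```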
Output
- Response - The generated text from the Vertex AI model, returned as an object.
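For orientation, the underlying Vertex AI text model returns its candidates in a `predictions` array; how the node wraps that payload into the Response object is not specified here, so the sketch below only illustrates the raw API shape.

```python
# Illustrative shape of a raw text-bison predict response (assumption:
# the node's Response object exposes this payload or a wrapper around it).
example_response = {
    "predictions": [
        {
            "content": "Generated text goes here...",
            "safetyAttributes": {"blocked": False, "categories": [], "scores": []},
        }
    ]
}

# The generated text of the first candidate:
generated_text = example_response["predictions"][0]["content"]
```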
How It Works
The Generate Text node sends a prompt to a Vertex AI text generation model and returns the model's text response. When executed, the node:
- Validates the connection ID and retrieves the authentication token
- Validates the required prompt input
- Collects all optional parameters and configurations
- Constructs a request with the prompt and parameters
- Sends the request to the Vertex AI text model endpoint
- Processes the response and returns it
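Conceptually, this is equivalent to the REST call sketched below against the Vertex AI `predict` endpoint. It is not the node's actual implementation: the project ID, prompt, credentials handling, and parameter values are placeholders, and the node obtains its token via the Connection Id rather than `google.auth`.

```python
import requests
import google.auth
from google.auth.transport.requests import Request

# Placeholder values; in the node these come from the Connect node and the Options.
project_id = "my-gcp-project"   # assumed project ID
location = "us-central1"        # Locations
publisher = "google"            # Publishers
model = "text-bison@001"        # Model

# Obtain an access token (stand-in for the token retrieved via the Connection Id).
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(Request())
headers = {"Authorization": f"Bearer {credentials.token}"}

endpoint = (
    f"https://{location}-aiplatform.googleapis.com/v1/"
    f"projects/{project_id}/locations/{location}/"
    f"publishers/{publisher}/models/{model}:predict"
)

body = {
    "instances": [{"prompt": "Write a two-sentence product description for a smart lamp."}],
    "parameters": {"temperature": 0.2, "maxOutputTokens": 256, "topK": 40, "topP": 0.95},
}

resp = requests.post(endpoint, headers=headers, json=body, timeout=60)
resp.raise_for_status()
print(resp.json()["predictions"][0]["content"])
```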
Requirements
- A valid connection to Vertex AI established with the Connect node
- Valid Google Cloud credentials with appropriate permissions
- A properly configured Vertex AI text generation model
- A clear prompt describing the desired text
Error Handling
The node will return specific errors in the following cases:
- Empty or invalid Connection ID
- Empty prompt
- Invalid parameter values (e.g., temperature outside 0.0-1.0 range)
- Missing required parameters
- Invalid model selection
- Network connectivity issues
- Vertex AI service errors
- Authentication failures
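When the underlying call is made directly, most of these failures surface as request exceptions or HTTP error statuses. Continuing the sketch from How It Works (the status-code mapping below is an assumption, not documented node behavior):

```python
import requests

# endpoint, headers, and body as constructed in the How It Works sketch.
try:
    resp = requests.post(endpoint, headers=headers, json=body, timeout=60)
    resp.raise_for_status()
except requests.exceptions.ConnectionError as exc:
    # Network connectivity issues
    print(f"Network error: {exc}")
except requests.exceptions.HTTPError as exc:
    status = exc.response.status_code
    if status in (401, 403):
        # Authentication failures or insufficient permissions
        print("Check the Connection Id and the Google Cloud credentials.")
    elif status == 400:
        # Invalid parameter values, missing fields, or an invalid model name
        print(f"Check the prompt, parameter ranges, and model: {exc.response.text}")
    else:
        # Other Vertex AI service errors
        print(f"Vertex AI error {status}: {exc.response.text}")
```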
Usage Notes
- The Connection ID must be obtained from a successful Connect node execution
- The Prompt should clearly describe what text you want to generate
- Temperature controls creativity: lower values (e.g., 0.2) suit factual responses, higher values (e.g., 0.8) suit creative responses
- Max Output Tokens limits response length
- TopK and TopP parameters control diversity of generated text
- Stop sequences can control where the model stops generating text
- Multiple candidates can be generated by increasing Candidate Count
- Different models offer different capabilities and maximum token limits
- The Locations parameter should match the region where your Vertex AI resources are deployed